modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
jieliu/Storm-7B
jieliu
2024-06-18T02:35:57Z
19
41
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "storm", "openchat", "RLAIF", "reward model", "conversational", "en", "dataset:berkeley-nest/Nectar", "arxiv:2406.11817", "arxiv:2310.03708", "base_model:openchat/openchat-3.5-0106", "base_model:finetune:openchat/openchat-3.5-0106", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-25T12:46:29Z
--- license: apache-2.0 library_name: transformers tags: - storm - mistral - openchat - RLAIF - reward model language: - en base_model: openchat/openchat-3.5-0106 datasets: - berkeley-nest/Nectar --- # Storm-7B - **Developed by**: [Jie Liu](https://jieliu.site/) \\(^{*1,2}\\), [Zhanhui Zhou](https://scholar.google.com/citations?user=SbACfYQAAAAJ&hl=zh-CN) \\(^{*2}\\), [Jiaheng Liu](https://liujiaheng.github.io/) \\(^{2}\\), [Xingyuan Bu](https://scholar.google.com.hk/citations?user=cqYaRhUAAAAJ&hl=zh-CN) \\(^{2}\\), [Chao Yang](https://scholar.google.com/citations?user=5KRbHPMAAAAJ&hl=zh-CN) \\(^{2}\\), [Han-Sen Zhong](https://scholar.google.com.hk/citations?user=X_ZfX8sAAAAJ&hl=zh-CN) \\(^{\dag 2}\\), [Wanli Ouyang](https://wlouyang.github.io/) \\(^{1,2}\\). - \\(^{1}\\)MMLab, The Chinese University of Hong Kong &ensp; \\(^{2}\\)Shanghai AI Laboratory - Paper: [Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level](https://arxiv.org/pdf/2406.11817) - Fine-tuned from model: [openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) - Dataset: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) - Reward Model: [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) Please see our paper for more details. ## Introduction We released Storm-7B, the first open-source language model comparable to the GPT-4 series on the [AlpacaEval 2.0](https://tatsu-lab.github.io/alpaca_eval/) leaderboard. Recent studies show that DPO benefits from iterative training with online preferences labeled by a trained reward model. In this work, we identify a pitfall of vanilla iterative DPO: improved response quality can lead to increased verbosity. To address this, we introduce iterative length-regularized DPO (iLR-DPO) to penalize response length. Our empirical results show that iLR-DPO can enhance a 7B model to perform on par with GPT-4 **without increasing verbosity**. ## Performance Our 7B model achieves a **50.5%** length-controlled win rate against GPT-4 Preview on AlpacaEval 2.0. <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/Tj_a1QntAxkhy2SXbOdmT.png" width="60%"> </p> Our model's LC win rate improves over iterations without significantly changing the response length, indicating better alignment with human values without length bias. The final trained model (iteration 3) achieves a 50.5% LC win rate, making it the first open-source model to surpass the baseline model GPT-4 Preview. In addition to regular decoding, we also test beam search and best-of-n sampling on top of our trained model. Beam search over our trained model shows a 5% improvement over regular decoding, and best-of-n sampling with Starling-RM-34B achieves a 61.6% LC win rate, outperforming GPT-4 Omni. <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/GGa28vaREaVq099MPdqcP.png" width="100%"> </p> We observe no significant degradation on traditional NLP tasks from the Hugging Face Open LLM Leaderboard. <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/8KEm_Ladg7Kqko8mC63SN.png" width="100%"> </p> ## Uses Our model uses the same chat template as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). A sample code snippet for inference using our model is provided below.
```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model = AutoModelForCausalLM.from_pretrained("jieliu/Storm-7B").to(device) tokenizer = AutoTokenizer.from_pretrained("jieliu/Storm-7B") model.eval().requires_grad_(False) def generate_response(prompt): input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) outputs = model.generate( input_ids, max_length=2048, do_sample=True, temperature=1.0, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) response_ids = outputs[0] response_text = tokenizer.decode(response_ids, skip_special_tokens=True) return response_text prompt = "How does a telescope work?" input_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(input_prompt) print("Response:", response_text) ``` ## Scripts You can reproduce our results on AlpacaEval 2.0 using the script provided below. ```bash git clone https://github.com/tatsu-lab/alpaca_eval.git cd alpaca_eval pip install -e . export OPENAI_API_KEY=<your_api_key> alpaca_eval evaluate_from_model --model_configs 'Storm-7B' ``` ## Limitations Our work has several limitations: (1) We focus on aligning with human preferences but only use GPT-4 as a proxy for human judgment to evaluate language models. (2) We reduce verbosity with a length penalty, though verbosity and length are not necessarily correlated. Future work could train a specific reward model to directly penalize verbosity, replacing the length margin with a verbosity margin, following the standard [MODPO pipeline](https://github.com/ZHZisZZ/modpo). ## Citation ``` @article{liu2024iterative, title = {Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level}, author = {Liu, Jie and Zhou, Zhanhui and Liu, Jiaheng and Bu, Xingyuan and Yang, Chao and Zhong, Han-Sen and Ouyang, Wanli}, journal={arXiv preprint arXiv:2406.11817}, year={2024} } @article{zhou2023beyond, title={Beyond one-preference-for-all: Multi-objective direct preference optimization}, author={Zhou, Zhanhui and Liu, Jie and Yang, Chao and Shao, Jing and Liu, Yu and Yue, Xiangyu and Ouyang, Wanli and Qiao, Yu}, journal={arXiv preprint arXiv:2310.03708}, year={2023} } ```
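For readers who want a concrete picture of the length regularization described in the Introduction, here is a minimal, illustrative PyTorch sketch. It assumes iLR-DPO augments the standard DPO logit with a margin proportional to the length difference between chosen and rejected responses; the exact objective, coefficients, and the iterative training loop are specified in the paper (arXiv:2406.11817), not here.

```python
# Illustrative sketch only -- see arXiv:2406.11817 for the exact iLR-DPO objective.
# Assumption: the standard DPO logit is augmented with a length-difference margin
# so that a preference for longer chosen responses is penalized.
import torch.nn.functional as F

def length_regularized_dpo_loss(
    policy_chosen_logps,    # log pi(y_w | x) under the current policy, shape (batch,)
    policy_rejected_logps,  # log pi(y_l | x) under the current policy
    ref_chosen_logps,       # log pi_ref(y_w | x) under the frozen reference model
    ref_rejected_logps,     # log pi_ref(y_l | x)
    chosen_lengths,         # |y_w| in tokens
    rejected_lengths,       # |y_l| in tokens
    beta=0.1,               # DPO temperature (illustrative value)
    alpha=0.01,             # length-penalty coefficient (illustrative value)
):
    # Standard DPO implicit-reward margin between chosen and rejected responses.
    logits = (policy_chosen_logps - ref_chosen_logps) - (policy_rejected_logps - ref_rejected_logps)
    # Length regularization: subtract a penalty that grows when the chosen
    # response is longer than the rejected one.
    length_margin = alpha * (chosen_lengths - rejected_lengths).float()
    return -F.logsigmoid(beta * logits - length_margin).mean()
```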
hbin0701/mistral_ultrafeedback_all
hbin0701
2024-06-18T02:26:00Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2024-06-18T01:54:19Z
--- license: apache-2.0 ---
MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF
MaziyarPanahi
2024-06-18T02:21:38Z
6
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-rfdxiqs", "base_model:quantized:mergekit-community/mergekit-slerp-rfdxiqs" ]
text-generation
2024-06-18T01:58:12Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:NousResearch/Hermes-2-Pro-Mistral-7B - base_model:WizardLM/WizardMath-7B-V1.1 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-rfdxiqs-GGUF base_model: mergekit-community/mergekit-slerp-rfdxiqs inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-rfdxiqs](https://huggingface.co/mergekit-community/mergekit-slerp-rfdxiqs) ## Description [MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-rfdxiqs](https://huggingface.co/mergekit-community/mergekit-slerp-rfdxiqs). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available (in beta as of 27/11/2023). * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
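As a quick orientation for the list above, here is a hedged usage sketch with llama-cpp-python, one of the libraries mentioned; the GGUF filename, context size, and sampling settings are placeholders, and the actual quantized file must first be downloaded from this repository.

```python
# Hypothetical usage sketch with llama-cpp-python; the model_path filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./mergekit-slerp-rfdxiqs.Q4_K_M.gguf",  # placeholder: use the file you downloaded
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a SLERP model merge?"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```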
Ganny/llama38binstruct_summarize
Ganny
2024-06-18T02:19:43Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-06-18T02:19:25Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: NousResearch/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: llama38binstruct_summarize results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama38binstruct_summarize This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.8179 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4723 | 1.25 | 25 | 1.2784 | | 0.4521 | 2.5 | 50 | 1.5971 | | 0.2549 | 3.75 | 75 | 1.6460 | | 0.1039 | 5.0 | 100 | 1.8179 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
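The card above leaves its usage sections empty, so here is a hedged sketch of loading this LoRA adapter on top of its Meta-Llama-3-8B-Instruct base with PEFT's `AutoPeftModelForCausalLM`; the prompt and generation settings are illustrative only.

```python
# Hedged example: load the adapter (its config records the base model) and generate.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Ganny/llama38binstruct_summarize",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")

prompt = "Summarize: The quick brown fox jumps over the lazy dog."  # placeholder input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```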
overfly83/llama2-7b-hf-adapter
overfly83
2024-06-18T02:17:06Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-05T05:40:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
user10101/model
user10101
2024-06-18T02:16:56Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-06-18T02:16:52Z
--- license: apache-2.0 ---
EmineYoubah/finetunedllama3Technix
EmineYoubah
2024-06-18T02:15:30Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-18T02:15:16Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** EmineYoubah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jiwonii97/Llama-atalk-jw-Ko-3-8B-v1
jiwonii97
2024-06-18T02:07:34Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-18T02:03:26Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lielbin/BabyBERTa-french1.25M-Masking-finetuned-squad
lielbin
2024-06-18T02:07:13Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-06-18T01:31:45Z
--- tags: - generated_from_trainer model-index: - name: BabyBERTa-french1.25M-Masking-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BabyBERTa-french1.25M-Masking-finetuned-squad This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
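A hedged usage sketch (not part of the generated card): running this SQuAD-style fine-tune with the transformers question-answering pipeline; the question and context are placeholder examples.

```python
# Hedged example: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lielbin/BabyBERTa-french1.25M-Masking-finetuned-squad",
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```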
jsfs11/L3-8B-Stheno-slerp
jsfs11
2024-06-18T02:02:55Z
5
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Sao10K/L3-8B-Stheno-v3.2", "Sao10K/L3-8B-Stheno-v3.1", "conversational", "base_model:Sao10K/L3-8B-Stheno-v3.1", "base_model:merge:Sao10K/L3-8B-Stheno-v3.1", "base_model:Sao10K/L3-8B-Stheno-v3.2", "base_model:merge:Sao10K/L3-8B-Stheno-v3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T01:54:49Z
--- base_model: - Sao10K/L3-8B-Stheno-v3.2 - Sao10K/L3-8B-Stheno-v3.1 tags: - merge - mergekit - lazymergekit - Sao10K/L3-8B-Stheno-v3.2 - Sao10K/L3-8B-Stheno-v3.1 --- # L3-8B-Stheno-slerp L3-8B-Stheno-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) * [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) ## 🧩 Configuration ```yaml slices: - sources: - model: Sao10K/L3-8B-Stheno-v3.2 layer_range: [0, 32] - model: Sao10K/L3-8B-Stheno-v3.1 layer_range: [0, 32] merge_method: slerp base_model: Sao10K/L3-8B-Stheno-v3.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "jsfs11/L3-8B-Stheno-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ayelets/Eitan_dog
ayelets
2024-06-18T02:01:19Z
1
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-06-18T02:01:16Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of TOK dog widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - ayelets/Eitan_dog <Gallery /> ## Model description These are ayelets/Eitan_dog LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of TOK dog to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](ayelets/Eitan_dog/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
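The "How to use" section above is still a TODO, so here is a hedged sketch of one plausible way to run these LoRA weights with diffusers on top of the SDXL base model, using the stated trigger phrase; the prompt, step count, and output path are illustrative.

```python
# Hedged sketch: load SDXL base, apply the DreamBooth LoRA weights, and generate an image.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
# The card notes madebyollin/sdxl-vae-fp16-fix was used during training; swapping it in is optional.
pipe.load_lora_weights("ayelets/Eitan_dog")

image = pipe("a photo of TOK dog on a beach", num_inference_steps=30).images[0]  # placeholder prompt
image.save("tok_dog.png")
```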
vinaybassa/llama38binstruct_summarize
vinaybassa
2024-06-18T02:00:51Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-06-18T02:00:43Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: NousResearch/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: llama38binstruct_summarize results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama38binstruct_summarize This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.5170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4584 | 1.25 | 25 | 1.1473 | | 0.4513 | 2.5 | 50 | 1.4473 | | 0.212 | 3.75 | 75 | 1.4875 | | 0.1193 | 5.0 | 100 | 1.5170 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
hardy99/finetunedllama3_loramodel
hardy99
2024-06-18T01:54:05Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-18T01:12:48Z
--- title: Llama3finetuned Lora emoji: 💬 colorFrom: yellow colorTo: purple sdk_version: 4.36.1 app_file: app.py pinned: false language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** hardy99 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
OmnicromsBrain/NeuralStar_Story-9b
OmnicromsBrain
2024-06-18T01:51:51Z
6
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OmnicromsBrain/StoryFusion-7B", "OmnicromsBrain/NeuralStar-7b-Lazy", "conversational", "base_model:OmnicromsBrain/NeuralStar-7b-Lazy", "base_model:merge:OmnicromsBrain/NeuralStar-7b-Lazy", "base_model:OmnicromsBrain/StoryFusion-7B", "base_model:merge:OmnicromsBrain/StoryFusion-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T01:45:15Z
--- base_model: - OmnicromsBrain/StoryFusion-7B - OmnicromsBrain/NeuralStar-7b-Lazy tags: - merge - mergekit - lazymergekit - OmnicromsBrain/StoryFusion-7B - OmnicromsBrain/NeuralStar-7b-Lazy --- # NeuralStar_Story-9b **TESTING** NeuralStar_Story-9b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OmnicromsBrain/StoryFusion-7B](https://huggingface.co/OmnicromsBrain/StoryFusion-7B) * [OmnicromsBrain/NeuralStar-7b-Lazy](https://huggingface.co/OmnicromsBrain/NeuralStar-7b-Lazy) ## 🧩 Configuration ```yaml slices: - sources: - model: OmnicromsBrain/StoryFusion-7B layer_range: [0, 24] - sources: - model: OmnicromsBrain/NeuralStar-7b-Lazy layer_range: [8, 32] merge_method: passthrough dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "OmnicromsBrain/NeuralStar_Story-9b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
adidrv/paligemma-cord-demo
adidrv
2024-06-18T01:48:01Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-18T01:28:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaziyarPanahi/mergekit-slerp-vhzhpmg-GGUF
MaziyarPanahi
2024-06-18T01:47:01Z
20
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-vhzhpmg", "base_model:quantized:mergekit-community/mergekit-slerp-vhzhpmg" ]
text-generation
2024-06-18T01:23:54Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02 - base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-vhzhpmg-GGUF base_model: mergekit-community/mergekit-slerp-vhzhpmg inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-vhzhpmg-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-vhzhpmg-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-vhzhpmg](https://huggingface.co/mergekit-community/mergekit-slerp-vhzhpmg) ## Description [MaziyarPanahi/mergekit-slerp-vhzhpmg-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-vhzhpmg-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-vhzhpmg](https://huggingface.co/mergekit-community/mergekit-slerp-vhzhpmg). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available (in beta as of 27/11/2023). * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
ntviet/whisper-small-co2
ntviet
2024-06-18T01:43:14Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "co", "dataset:ntviet/Co-audio-dataset2", "base_model:ntviet/whisper-small-co", "base_model:finetune:ntviet/whisper-small-co", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-18T00:38:16Z
--- language: - co license: apache-2.0 base_model: ntviet/whisper-small-co tags: - generated_from_trainer datasets: - ntviet/Co-audio-dataset2 model-index: - name: Whisper Small Co 2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Co 2 This model is a fine-tuned version of [ntviet/whisper-small-co](https://huggingface.co/ntviet/whisper-small-co) on the Co audio dataset. It achieves the following results on the evaluation set: - Loss: 0.1704 - Cer Ortho: 17.3028 - Cer: 16.8798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 600 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer | |:-------------:|:-------:|:----:|:---------------:|:---------:|:-------:| | 0.0 | 85.7143 | 600 | 0.1704 | 17.3028 | 16.8798 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
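A hedged usage sketch (not part of the generated card): transcribing an audio file with the transformers automatic-speech-recognition pipeline; the file path is a placeholder.

```python
# Hedged example: transcribe a local audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ntviet/whisper-small-co2")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```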
huhuhuhus/google-gemma-2b-1718674987
huhuhuhus
2024-06-18T01:43:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2024-06-18T01:43:08Z
--- library_name: peft base_model: google/gemma-2b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
huhuhuhus/Qwen-Qwen1.5-1.8B-1718674912
huhuhuhus
2024-06-18T01:41:57Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-18T01:41:52Z
--- library_name: peft base_model: Qwen/Qwen1.5-1.8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
MelodyMachine/Deepfake-audio-detection-V2
MelodyMachine
2024-06-18T01:41:13Z
1487
9
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:audiofolder", "base_model:motheecreator/Deepfake-audio-detection", "base_model:finetune:motheecreator/Deepfake-audio-detection", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-06-17T17:55:52Z
--- license: apache-2.0 base_model: motheecreator/Deepfake-audio-detection tags: - generated_from_trainer datasets: - audiofolder metrics: - accuracy model-index: - name: Deepfake-audio-detection-V2 results: - task: name: Audio Classification type: audio-classification dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9972843305874898 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Deepfake-audio-detection-V2 This model is a fine-tuned version of [motheecreator/Deepfake-audio-detection](https://huggingface.co/motheecreator/Deepfake-audio-detection) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0141 - Accuracy: 0.9973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0503 | 1.0 | 1381 | 0.0514 | 0.9858 | | 0.0327 | 2.0 | 2762 | 0.0174 | 0.9956 | | 0.0064 | 3.0 | 4143 | 0.0221 | 0.9950 | | 0.0003 | 4.0 | 5524 | 0.0174 | 0.9965 | | 0.0115 | 5.0 | 6905 | 0.0141 | 0.9973 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
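A hedged usage sketch (not part of the generated card): scoring an audio clip with the transformers audio-classification pipeline; the file path is a placeholder and the label names depend on the training data.

```python
# Hedged example: classify a local audio clip with the fine-tuned detector.
from transformers import pipeline

detector = pipeline("audio-classification", model="MelodyMachine/Deepfake-audio-detection-V2")
for pred in detector("clip.wav"):  # "clip.wav" is a placeholder path
    print(pred["label"], round(pred["score"], 4))
```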
huhuhuhus/Qwen-Qwen1.5-0.5B-1718674808
huhuhuhus
2024-06-18T01:40:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-18T01:40:08Z
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
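The card itself is a blank template, but the row's metadata identifies this repository as a PEFT adapter for Qwen/Qwen1.5-0.5B, so a hedged loading sketch (nothing beyond the metadata is assumed about what the adapter was trained for) would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter repo ids are taken from this row's metadata; the
# adapter's task and intended use are undocumented, so treat this as a sketch only.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "huhuhuhus/Qwen-Qwen1.5-0.5B-1718674808")
```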
AuraRuby/Taxi-v3
AuraRuby
2024-06-18T01:38:16Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-06-18T01:38:13Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="AuraRuby/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
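Building on the usage snippet in the card, a greedy evaluation rollout could look like the sketch below. It assumes the pickled dict exposes the Q-table under a `qtable` key (the Deep RL course convention, not confirmed by this card) and uses the Gymnasium step/reset API.

```python
import gymnasium as gym
import numpy as np

# Continuing from the card's snippet, where `model` was loaded with load_from_hub;
# "qtable" is an assumed key name.
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

state, _ = env.reset()
done, episode_return = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```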
huhuhuhus/google-gemma-2b-1718674634
huhuhuhus
2024-06-18T01:37:22Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2024-06-18T01:37:14Z
--- library_name: peft base_model: google/gemma-2b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
huhuhuhus/Qwen-Qwen1.5-0.5B-1718674456
huhuhuhus
2024-06-18T01:34:20Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-18T01:34:16Z
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Minbyul/llama3-8b-instruct-wo-kqa_golden-iter-dpo-step1
Minbyul
2024-06-18T01:32:27Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/llama3-8b-instruct-wo-kqa_golden-iter-sft-step1", "base_model:finetune:Minbyul/llama3-8b-instruct-wo-kqa_golden-iter-sft-step1", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T01:21:50Z
--- license: llama3 base_model: Minbyul/llama3-8b-instruct-wo-kqa_golden-iter-sft-step1 tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: llama3-8b-instruct-wo-kqa_golden-iter-dpo-step1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-instruct-wo-kqa_golden-iter-dpo-step1 This model is a fine-tuned version of [Minbyul/llama3-8b-instruct-wo-kqa_golden-iter-sft-step1](https://huggingface.co/Minbyul/llama3-8b-instruct-wo-kqa_golden-iter-sft-step1) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6931 - Rewards/chosen: 0.0 - Rewards/rejected: 0.0 - Rewards/accuracies: 0.0 - Rewards/margins: 0.0 - Logps/rejected: -369.7173 - Logps/chosen: -476.8867 - Logits/rejected: -0.5081 - Logits/chosen: -0.6523 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
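No inference example accompanies this DPO checkpoint; a minimal generation sketch using the standard chat-template API follows. The sampling settings and the example question are illustrative and not taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Minbyul/llama3-8b-instruct-wo-kqa_golden-iter-dpo-step1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What are common causes of iron-deficiency anemia?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```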
lielbin/BabyBERTa-french1.25M-Masking-finetuned-qasrl
lielbin
2024-06-18T01:30:57Z
114
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-06-18T01:03:59Z
--- tags: - generated_from_trainer model-index: - name: BabyBERTa-french1.25M-Masking-finetuned-qasrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BabyBERTa-french1.25M-Masking-finetuned-qasrl This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
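Since this is a RoBERTa checkpoint fine-tuned for extractive question answering, a short usage sketch with the QA pipeline may help; the French question/context pair below is purely illustrative and is not drawn from the (unknown) training data.

```python
from transformers import pipeline

# Extractive QA with the fine-tuned BabyBERTa checkpoint.
qa = pipeline(
    "question-answering",
    model="lielbin/BabyBERTa-french1.25M-Masking-finetuned-qasrl",
)

result = qa(
    question="Qui a mangé la pomme ?",
    context="Hier, Marie a mangé la pomme dans le jardin.",
)
print(result["answer"], result["score"])
```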
dmo0798/trained_dilibert_sentiment_analysis
dmo0798
2024-06-18T01:26:21Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-14T03:20:05Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased-finetuned-sst-2-english tags: - generated_from_trainer metrics: - accuracy model-index: - name: trained_dilibert_sentiment_analysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_dilibert_sentiment_analysis This model is a fine-tuned version of [distilbert/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3948 - Accuracy: 0.906 - Confusion Matrix: [[174, 46], [48, 732]] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Confusion Matrix | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------------------:| | No log | 1.0 | 188 | 0.2507 | 0.905 | [[168, 52], [43, 737]] | | No log | 2.0 | 376 | 0.2797 | 0.904 | [[172, 48], [48, 732]] | | 0.2241 | 3.0 | 564 | 0.3635 | 0.906 | [[154, 66], [28, 752]] | | 0.2241 | 4.0 | 752 | 0.3798 | 0.908 | [[171, 49], [43, 737]] | | 0.2241 | 5.0 | 940 | 0.3948 | 0.906 | [[174, 46], [48, 732]] | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
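For completeness, a minimal inference sketch for this sentiment classifier is shown below; the example sentences are illustrative, and the label names are assumed to be inherited from the SST-2 base model rather than documented by this card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dmo0798/trained_dilibert_sentiment_analysis",
)

# Label names (e.g. NEGATIVE/POSITIVE) are assumed to follow the SST-2 base model;
# verify against this checkpoint's config.json before relying on them.
print(classifier([
    "This product works exactly as described.",
    "Broke after one week, very disappointed.",
]))
```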
Vicman229/distilbert-base-uncased-finetuned-sst-2-english-tuning-amazon-baby-5000
Vicman229
2024-06-18T01:17:24Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-16T23:26:05Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased-finetuned-sst-2-english tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sst-2-english-tuning-amazon-baby-5000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst-2-english-tuning-amazon-baby-5000 This model is a fine-tuned version of [distilbert/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0046 - Accuracy: 0.998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
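A lower-level sketch, loading the model directly and reading per-class probabilities, is given below; the review texts are illustrative, and the id-to-label mapping is assumed to come from the SST-2 base model.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Vicman229/distilbert-base-uncased-finetuned-sst-2-english-tuning-amazon-baby-5000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

reviews = ["Perfect fit for our crib.", "The zipper broke on day two."]
batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)

# id2label is assumed to be inherited from the SST-2 base model; check config.json.
for review, p in zip(reviews, probs):
    print(review, {model.config.id2label[i]: round(float(s), 3) for i, s in enumerate(p)})
```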
rtorresb/mi-super-modelo
rtorresb
2024-06-18T01:15:37Z
184
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-18T00:53:31Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: mi-super-modelo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mi-super-modelo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7744 - Accuracy: 0.1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7125 | 1.0 | 5 | 1.7744 | 0.1 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
mradermacher/Hercules-Stheno-v1-GGUF
mradermacher
2024-06-18T01:15:15Z
86
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:lik07/Hercules-Stheno-v1", "base_model:quantized:lik07/Hercules-Stheno-v1", "endpoints_compatible", "region:us" ]
null
2024-06-18T00:05:59Z
--- base_model: lik07/Hercules-Stheno-v1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/lik07/Hercules-Stheno-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hercules-Stheno-v1-GGUF/resolve/main/Hercules-Stheno-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other 
model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
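As a concrete counterpart to the usage note above, one way to fetch and run a single quant is with `huggingface_hub` plus `llama-cpp-python`; the Q4_K_M file name is taken from the table above, everything else (context size, prompt) is illustrative, and any other GGUF runtime (llama.cpp CLI, ollama, etc.) works just as well.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quants listed in the table (Q4_K_M shown here).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Hercules-Stheno-v1-GGUF",
    filename="Hercules-Stheno-v1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Write a short greeting.", max_tokens=64)["choices"][0]["text"])
```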
chainup244/Qwen-Qwen1.5-0.5B-1718673245
chainup244
2024-06-18T01:14:39Z
152
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T01:14:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Vicman229/tmp_trainer
Vicman229
2024-06-18T01:11:25Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-16T23:15:57Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased-finetuned-sst-2-english tags: - generated_from_trainer metrics: - accuracy model-index: - name: tmp_trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tmp_trainer This model is a fine-tuned version of [distilbert/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5925 - Accuracy: 0.892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
MadameMoonflower/CitrusTea-Test
MadameMoonflower
2024-06-18T01:06:04Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:grimjim/kukulemon-7B", "base_model:merge:grimjim/kukulemon-7B", "base_model:matchaaaaa/Chaifighter-20B-v2", "base_model:merge:matchaaaaa/Chaifighter-20B-v2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T01:01:41Z
--- base_model: - matchaaaaa/Chaifighter-20B-v2 - grimjim/kukulemon-7B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [matchaaaaa/Chaifighter-20B-v2](https://huggingface.co/matchaaaaa/Chaifighter-20B-v2) * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: grimjim/kukulemon-7B layer_range: [0, 24] - sources: - model: matchaaaaa/Chaifighter-20B-v2 layer_range: [18, 40] merge_method: passthrough dtype: float16 ```
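To reproduce this merge from the configuration above, one possible driver script is sketched below; it assumes `mergekit` is installed and exposes the `mergekit-yaml` command-line entry point, and the file and output directory names are placeholders.

```python
import pathlib
import subprocess

# The recipe is the YAML from the card, written out with its original structure.
config = """\
slices:
  - sources:
      - model: grimjim/kukulemon-7B
        layer_range: [0, 24]
  - sources:
      - model: matchaaaaa/Chaifighter-20B-v2
        layer_range: [18, 40]
merge_method: passthrough
dtype: float16
"""
pathlib.Path("citrustea.yml").write_text(config)

# Run the merge; the output directory name is a placeholder.
subprocess.run(["mergekit-yaml", "citrustea.yml", "./CitrusTea-Test"], check=True)
```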
richardkelly/google-gemma-2b-1718672731
richardkelly
2024-06-18T01:05:51Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2024-06-18T01:05:31Z
--- library_name: peft base_model: google/gemma-2b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step3
Minbyul
2024-06-18T01:05:41Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step2", "base_model:finetune:Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T04:04:58Z
--- license: apache-2.0 base_model: Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step2 tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: biomistral-7b-wo-kqa_golden-iter-dpo-step3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biomistral-7b-wo-kqa_golden-iter-dpo-step3 This model is a fine-tuned version of [Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step2](https://huggingface.co/Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step2) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6914 - Rewards/chosen: 0.0080 - Rewards/rejected: 0.0043 - Rewards/accuracies: 0.6964 - Rewards/margins: 0.0037 - Logps/rejected: -164.6167 - Logps/chosen: -234.3960 - Logits/rejected: -2.1831 - Logits/chosen: -2.2946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
richardkelly/Qwen-Qwen1.5-7B-1718672669
richardkelly
2024-06-18T01:04:39Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-18T01:04:29Z
--- library_name: peft base_model: Qwen/Qwen1.5-7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
lielbin/BabyBERTa-french1.25M-Masking-finetuned-qamr
lielbin
2024-06-18T01:03:29Z
113
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-06-18T00:56:31Z
--- tags: - generated_from_trainer model-index: - name: BabyBERTa-french1.25M-Masking-finetuned-qamr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BabyBERTa-french1.25M-Masking-finetuned-qamr This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
tomg-group-umd/GenQA-math-llama-3
tomg-group-umd
2024-06-18T00:56:33Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T23:19:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oscard14/pca_interpretations_contextualizer_falcon_7b_V3
oscard14
2024-06-18T00:52:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-18T00:52:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/AmberChat-GGUF
mradermacher
2024-06-18T00:48:32Z
61
0
transformers
[ "transformers", "gguf", "nlp", "llm", "en", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:icybee/share_gpt_90k_v1", "base_model:LLM360/AmberChat", "base_model:quantized:LLM360/AmberChat", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-17T22:53:51Z
--- base_model: LLM360/AmberChat datasets: - WizardLM/WizardLM_evol_instruct_V2_196k - icybee/share_gpt_90k_v1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - nlp - llm --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/LLM360/AmberChat <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/AmberChat-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/AmberChat-GGUF/resolve/main/AmberChat.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
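For anyone new to GGUF, a minimal sketch of running one of the quants above with `llama-cpp-python` is shown below. The chosen file (Q4_K_M), context size, and prompt are illustrative assumptions rather than recommendations from the quantizer; multi-part quants would need to be concatenated first, as described in the linked README.

```python
# Minimal sketch: fetch a single-file quant and run it with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/AmberChat-GGUF",
    filename="AmberChat.Q4_K_M.gguf",  # "fast, recommended" entry from the quant table
)

llm = Llama(model_path=model_path, n_ctx=2048)  # context length is an illustrative choice
out = llm("### Human: Give me one sentence about AmberChat.\n### Assistant:", max_tokens=128)
print(out["choices"][0]["text"])
```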
richardkelly/Qwen-Qwen1.5-0.5B-1718671500
richardkelly
2024-06-18T00:45:06Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-18T00:45:00Z
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
alivi/fine-tuning_Zephyr-7b_SpanishQA
alivi
2024-06-18T00:44:45Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "es", "dataset:alivi/QASpanish", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-16T19:10:03Z
--- language: - en - es library_name: transformers datasets: - alivi/QASpanish pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
T3Q-LLM/T3Q-LLM-TE-NLI-STS-v1.0
T3Q-LLM
2024-06-18T00:40:00Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-14T04:39:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## How to Get Started with the Model ## Evaluation hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM-TE-NLI-STS-v1.0,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8 | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.9509|± |0.0058| | | |macro_f1|0.9508|± |0.0058| |kobest_copa | 0|acc |0.7860|± |0.0130| | | |macro_f1|0.7858|± |0.0130| |kobest_hellaswag| 0|acc |0.5200|± |0.0224| | | |acc_norm|0.5360|± |0.0223| | | |macro_f1|0.5172|± |0.0223| |kobest_sentineg | 0|acc |0.8791|± |0.0164| | | |macro_f1|0.8787|± |0.0164|
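The table above was produced with lm-evaluation-harness using the settings quoted in its header. A rough sketch of reproducing it through the harness's Python API follows; the `hf-causal-experimental` backend name matches the quoted run and belongs to older harness releases, so argument names may differ in current versions.

```python
# Rough sketch: re-run the kobest evaluation reported above with lm-evaluation-harness,
# mirroring the quoted settings (zero-shot, batch_size=8). API details vary by harness version.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=T3Q-LLM/T3Q-LLM-TE-NLI-STS-v1.0,use_accelerate=True,trust_remote_code=True",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```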
dd3434/distilbert-base-uncased-finetuned-emotion
dd3434
2024-06-18T00:34:57Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-17T23:39:23Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9225 - name: F1 type: f1 value: 0.9225998021167342 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2149 - Accuracy: 0.9225 - F1: 0.9226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8325 | 1.0 | 250 | 0.3096 | 0.914 | 0.9136 | | 0.2534 | 2.0 | 500 | 0.2149 | 0.9225 | 0.9226 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
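A minimal inference sketch for the fine-tuned checkpoint is shown below; the example sentence is an illustrative assumption, and the label set comes from the `emotion` dataset rather than from this card.

```python
# Minimal sketch: run the fine-tuned emotion classifier via the transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dd3434/distilbert-base-uncased-finetuned-emotion",
)

# Returns a list like [{'label': ..., 'score': ...}] with one of the emotion dataset's labels.
print(classifier("I can't wait to see the results of this experiment!"))
```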
Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step1_gamma0
Minbyul
2024-06-18T00:27:58Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/biomistral-7b-wo-kqa_golden-iter-sft-step1_gamma0", "base_model:finetune:Minbyul/biomistral-7b-wo-kqa_golden-iter-sft-step1_gamma0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T00:11:07Z
--- license: apache-2.0 base_model: Minbyul/biomistral-7b-wo-kqa_golden-iter-sft-step1_gamma0 tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: biomistral-7b-wo-kqa_golden-iter-dpo-step1_gamma0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biomistral-7b-wo-kqa_golden-iter-dpo-step1_gamma0 This model is a fine-tuned version of [Minbyul/biomistral-7b-wo-kqa_golden-iter-sft-step1_gamma0](https://huggingface.co/Minbyul/biomistral-7b-wo-kqa_golden-iter-sft-step1_gamma0) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Rewards/chosen: -0.0011 - Rewards/rejected: 0.0003 - Rewards/accuracies: 0.3333 - Rewards/margins: -0.0014 - Logps/rejected: -193.9042 - Logps/chosen: -136.6186 - Logits/rejected: -2.7172 - Logits/chosen: -3.2298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
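As a rough illustration of how the hyperparameters above map onto a TRL DPO run, a sketch is given below. It is not the authors' training script: the dataset preprocessing is simplified (the alignment-handbook applies a chat template to the preference pairs), and `DPOTrainer` keyword arguments differ between TRL releases.

```python
# Rough sketch (not the original training script): DPO fine-tuning with TRL, using the
# learning rate, scheduler, and batch settings reported in this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "Minbyul/biomistral-7b-wo-kqa_golden-iter-sft-step1_gamma0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

raw = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

def to_dpo_format(example):
    # chosen/rejected are chat-message lists; keep only the final assistant turn as text
    # (a simplification compared with the alignment-handbook's chat-template formatting).
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"][-1]["content"],
        "rejected": example["rejected"][-1]["content"],
    }

train_dataset = raw.map(to_dpo_format, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="biomistral-7b-dpo-sketch",
    learning_rate=1e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds a frozen reference copy when None is passed
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```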
kevin36524/ymail_search_qwen2-0.5B-16bit
kevin36524
2024-06-18T00:26:37Z
152
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T00:25:56Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft base_model: unsloth/Qwen2-0.5b-bnb-4bit --- # Uploaded model - **Developed by:** kevin36524 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-0.5b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
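A minimal sketch of querying the uploaded 16-bit checkpoint with plain transformers follows; the example request is an illustrative guess at the mail-search use case suggested by the repository name, and it assumes the saved tokenizer ships Qwen2's chat template.

```python
# Minimal sketch: load the 16-bit Qwen2 checkpoint and generate one reply.
# Assumes torch and accelerate are installed (for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevin36524/ymail_search_qwen2-0.5B-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Find emails about my flight booking from last week."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```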
oolson/stest
oolson
2024-06-18T00:22:57Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-06-18T00:22:57Z
--- license: apache-2.0 ---
kevin36524/ymail_search_qwen2_0.5B_lora
kevin36524
2024-06-18T00:20:17Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-18T00:20:09Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl base_model: unsloth/Qwen2-0.5b-bnb-4bit --- # Uploaded model - **Developed by:** kevin36524 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-0.5b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
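Since this repository's name suggests it holds only the LoRA adapter, a hedged sketch of attaching it to the 4-bit base model with PEFT is given below; whether it loads this way depends on what files the repository actually contains.

```python
# Hedged sketch: attach the presumed LoRA adapter to the 4-bit Qwen2 base it names.
# Assumes the repo holds a PEFT adapter (adapter_config.json plus weights) and that
# bitsandbytes is installed for the pre-quantized base checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2-0.5b-bnb-4bit"
adapter_id = "kevin36524/ymail_search_qwen2_0.5B_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Search my inbox for unread newsletters."  # illustrative query
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```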
gaodrew/cicero
gaodrew
2024-06-18T00:12:43Z
182
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "la", "dataset:Fece228/latin-literature-dataset-170M", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T09:49:06Z
--- library_name: transformers license: apache-2.0 datasets: - Fece228/latin-literature-dataset-170M language: - la --- Pretrained from scratch with the GPT-2 architecture on a dataset of Latin texts ([Corpus Corporum](https://huggingface.co/datasets/Fece228/latin-literature-dataset-170M)). 64-token context; loss 4.5 after training for 1 epoch on 492 million tokens. GPT-2-style tokenizer trained with a min_frequency of 2000. Tends to get repetitive and is not very coherent, due to the small model size and limited data.
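Given the 64-token context noted above, a short generation sketch is included below; the Latin prompt and sampling settings are illustrative assumptions.

```python
# Minimal sketch: generate Latin text with the from-scratch GPT-2 model.
# The model was trained with a 64-token context, so keep prompt plus continuation short.
from transformers import pipeline

generator = pipeline("text-generation", model="gaodrew/cicero")

out = generator(
    "Gallia est omnis divisa in partes tres",  # illustrative prompt
    max_new_tokens=40,
    do_sample=True,
    temperature=0.8,
)
print(out[0]["generated_text"])
```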
datek/Qwen-Qwen1.5-7B-1718669361
datek
2024-06-18T00:09:24Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-18T00:09:22Z
--- library_name: peft base_model: Qwen/Qwen1.5-7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
someoneskilled/exbot_v2
someoneskilled
2024-06-18T00:09:07Z
149
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-16T21:44:58Z
--- license: apache-2.0 ---
powermove72/Vortex-1
powermove72
2024-06-18T00:04:40Z
7
0
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "GritLM/GritLM-7B", "GreenNode/GreenNode-mini-7B-multilingual-v1olet", "conversational", "custom_code", "base_model:GreenNode/GreenNode-mini-7B-multilingual-v1olet", "base_model:merge:GreenNode/GreenNode-mini-7B-multilingual-v1olet", "base_model:GritLM/GritLM-7B", "base_model:merge:GritLM/GritLM-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T23:30:30Z
--- base_model: - GritLM/GritLM-7B - GreenNode/GreenNode-mini-7B-multilingual-v1olet - GritLM/GritLM-7B - GreenNode/GreenNode-mini-7B-multilingual-v1olet - GritLM/GritLM-7B - GreenNode/GreenNode-mini-7B-multilingual-v1olet - GritLM/GritLM-7B - GreenNode/GreenNode-mini-7B-multilingual-v1olet tags: - merge - mergekit - lazymergekit - GritLM/GritLM-7B - GreenNode/GreenNode-mini-7B-multilingual-v1olet --- # Vortex-1 Vortex-1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) * [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet) * [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) * [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet) * [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) * [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet) * [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) * [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet) ## 🧩 Configuration ```yaml slices: - sources: - model: GritLM/GritLM-7B layer_range: [0, 4] - sources: - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet layer_range: [4, 8] - sources: - model: GritLM/GritLM-7B layer_range: [8, 12] - sources: - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet layer_range: [12, 16] - sources: - model: GritLM/GritLM-7B layer_range: [16, 20] - sources: - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet layer_range: [20, 24] - sources: - model: GritLM/GritLM-7B layer_range: [24, 28] - sources: - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet layer_range: [28, 32] merge_method: passthrough tokenizer_source: union dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "powermove72/Vortex-1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Xu-Ouyang/pythia-70m-deduped-int4-GPTQ-wikitext2
Xu-Ouyang
2024-06-18T00:02:50Z
79
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-06-17T21:25:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
javierorjuela/results
javierorjuela
2024-06-18T00:01:37Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-18T00:01:09Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-multilingual-cased tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
annazdr/nace-pl-v2
annazdr
2024-06-17T23:41:12Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:12822", "loss:BatchAllTripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-17T23:40:53Z
--- language: [] library_name: sentence-transformers tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:12822 - loss:BatchAllTripletLoss base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 datasets: [] widget: - source_sentence: parcel-packing and gift-wrapping sentences: - retail sale of cleaning products, e - cafeterias - ' ' - source_sentence: Sprzedaż detaliczna mięsa i wyrobów z mięsa sentences: - ' ' - ' revenues from sale of advertising space' - g - source_sentence: g sentences: - installation of the system and provision of training and support to users of the system- activities of auditing and certification of computing and data processing infrastructures and services - ' revenues from sale of advertising space' - 47.75 Retail sale of cosmetic and toilet articles - source_sentence: lighterage, salvage activities sentences: - hairstyling - ' this class also includes: cladding of metal pipes with plastics' - usługi pośrednictwa w zakresie transportu pasażerskiego - source_sentence: manufacture of glass mirrors sentences: - manufacture of electroplating machinery - ' protective face shields/visors, of plastics, e' - cow peas pipeline_tag: sentence-similarity --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("annazdr/nace-pl-v2")
# Run inference
sentences = [
    'manufacture of glass mirrors',
    ' protective face shields/visors, of plastics, e',
    'manufacture of electroplating machinery',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 12,822 training samples
* Columns: <code>sentence_0</code> and <code>label</code>
* Approximate statistics based on the first 1000 samples:

| | sentence_0 | label |
|:--------|:-----------|:------|
| type | string | int |
| details | min: 2 tokens, mean: 15.14 tokens, max: 128 tokens | several hundred class IDs (0–436+), each covering roughly 0.1%–0.7% of the sample; the full per-class breakdown is truncated in the source |
~0.20%</li><li>437: ~0.30%</li><li>438: ~0.20%</li><li>440: ~0.20%</li><li>441: ~0.30%</li><li>442: ~0.20%</li><li>443: ~0.10%</li><li>444: ~0.30%</li><li>445: ~0.20%</li><li>446: ~0.20%</li><li>448: ~0.20%</li><li>449: ~0.30%</li><li>451: ~0.20%</li><li>452: ~0.10%</li><li>454: ~0.20%</li><li>455: ~0.20%</li><li>456: ~0.10%</li><li>458: ~0.30%</li><li>459: ~0.10%</li><li>460: ~0.10%</li><li>462: ~0.10%</li><li>463: ~0.40%</li><li>464: ~0.10%</li><li>465: ~0.20%</li><li>466: ~0.10%</li><li>467: ~0.40%</li><li>468: ~0.10%</li><li>469: ~0.30%</li><li>471: ~0.10%</li><li>475: ~0.30%</li><li>476: ~0.50%</li><li>477: ~0.10%</li><li>479: ~0.40%</li><li>480: ~0.30%</li><li>482: ~0.10%</li><li>483: ~0.30%</li><li>484: ~0.10%</li><li>485: ~0.20%</li><li>486: ~0.10%</li><li>487: ~0.10%</li><li>490: ~0.30%</li><li>491: ~0.40%</li><li>492: ~0.40%</li><li>493: ~0.10%</li><li>494: ~0.10%</li><li>495: ~0.10%</li><li>498: ~0.20%</li><li>499: ~0.40%</li><li>500: ~0.30%</li><li>501: ~0.30%</li><li>502: ~0.30%</li><li>504: ~0.20%</li><li>505: ~0.20%</li><li>506: ~0.10%</li><li>507: ~0.20%</li><li>508: ~0.10%</li><li>511: ~0.10%</li><li>512: ~0.60%</li><li>513: ~0.10%</li><li>515: ~0.10%</li><li>516: ~0.30%</li><li>517: ~0.40%</li><li>519: ~0.30%</li><li>520: ~0.30%</li><li>521: ~0.10%</li><li>522: ~0.20%</li><li>523: ~0.10%</li><li>524: ~0.50%</li><li>525: ~0.60%</li><li>527: ~0.20%</li><li>528: ~0.10%</li><li>530: ~0.10%</li><li>533: ~0.40%</li><li>534: ~0.50%</li><li>535: ~0.40%</li><li>536: ~0.10%</li><li>537: ~0.20%</li><li>538: ~0.40%</li><li>539: ~0.10%</li><li>540: ~0.10%</li><li>542: ~0.30%</li><li>543: ~0.10%</li><li>544: ~0.10%</li><li>545: ~0.20%</li><li>546: ~0.20%</li><li>548: ~0.20%</li><li>549: ~0.20%</li><li>550: ~0.30%</li><li>551: ~0.30%</li><li>552: ~0.10%</li><li>554: ~0.10%</li><li>555: ~0.20%</li><li>557: ~0.20%</li><li>560: ~0.10%</li><li>561: ~0.20%</li><li>562: ~0.10%</li><li>564: ~0.40%</li><li>565: ~0.10%</li><li>566: ~0.10%</li><li>567: ~0.20%</li><li>570: ~0.10%</li><li>572: ~0.30%</li><li>573: ~0.10%</li><li>574: ~0.10%</li><li>575: ~0.10%</li><li>576: ~0.10%</li><li>577: ~0.20%</li><li>578: ~0.50%</li><li>579: ~0.40%</li><li>581: ~0.20%</li><li>585: ~0.40%</li><li>586: ~0.10%</li><li>587: ~0.20%</li><li>588: ~0.20%</li><li>590: ~0.20%</li><li>592: ~0.10%</li><li>595: ~0.10%</li><li>597: ~0.20%</li><li>600: ~0.10%</li><li>601: ~0.10%</li><li>603: ~0.10%</li><li>604: ~0.10%</li><li>608: ~0.10%</li><li>611: ~0.10%</li><li>612: ~0.20%</li><li>613: ~0.10%</li><li>619: ~0.20%</li><li>620: ~0.20%</li><li>622: ~0.10%</li><li>625: ~0.20%</li><li>629: ~0.10%</li><li>631: ~0.20%</li><li>632: ~0.10%</li><li>633: ~0.20%</li><li>634: ~0.10%</li><li>635: ~0.40%</li><li>640: ~0.10%</li><li>643: ~0.10%</li><li>645: ~0.10%</li><li>648: ~0.10%</li></ul> | * Samples: | sentence_0 | label | |:----------------------------------------------------------------------------------|:-----------------| | <code>swimming clubs</code> | <code>475</code> | | <code> </code> | <code>581</code> | | <code>this class includes: mining of ores valued chiefly for iron content</code> | <code>351</code> | * Loss: [<code>BatchAllTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchalltripletloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `num_train_epochs`: 4 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - 
`overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.0+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings 
using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### BatchAllTripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
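For context, here is a minimal sketch of what the training configuration listed above looks like in code with the Sentence Transformers v3 trainer. The base checkpoint and the two inline rows are placeholders (this excerpt does not name the actual base model or dataset); only `BatchAllTripletLoss`, the batch size of 256 and the 4 epochs are taken from the hyperparameters above.

```python
# Hypothetical training sketch; the checkpoint and the tiny inline dataset are
# stand-ins, only the loss and hyperparameters are taken from the card above.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import BatchAllTripletLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder base

# One text column plus an integer class label, matching the sample rows above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["swimming clubs", "this class includes: mining of ores valued chiefly for iron content"],
    "label": [475, 351],
})

args = SentenceTransformerTrainingArguments(
    output_dir="out",
    per_device_train_batch_size=256,  # from the card
    num_train_epochs=4,               # from the card
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=BatchAllTripletLoss(model),
)
trainer.train()
```

`BatchAllTripletLoss` builds every valid anchor/positive/negative triplet inside each batch from the integer labels, which is why a comparatively large batch size such as 256 is useful here.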
MrezaPRZ/codegemma_data_augmentation_bird_combined_with_synethetic_bird_dev
MrezaPRZ
2024-06-17T23:40:12Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T22:02:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
powermove72/Shark-1
powermove72
2024-06-17T23:38:46Z
10
0
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "GritLM/GritLM-7B", "argilla/notus-7b-v1", "GreenNode/GreenNode-mini-7B-multilingual-v1olet", "conversational", "custom_code", "base_model:GreenNode/GreenNode-mini-7B-multilingual-v1olet", "base_model:merge:GreenNode/GreenNode-mini-7B-multilingual-v1olet", "base_model:GritLM/GritLM-7B", "base_model:merge:GritLM/GritLM-7B", "base_model:argilla/notus-7b-v1", "base_model:merge:argilla/notus-7b-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T23:06:17Z
--- base_model: - GritLM/GritLM-7B - argilla/notus-7b-v1 - GreenNode/GreenNode-mini-7B-multilingual-v1olet tags: - merge - mergekit - lazymergekit - GritLM/GritLM-7B - argilla/notus-7b-v1 - GreenNode/GreenNode-mini-7B-multilingual-v1olet --- # Shark-1 Shark-1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) * [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1) * [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet) ## 🧩 Configuration ```yaml slices: - sources: - model: GritLM/GritLM-7B layer_range: [0, 8] - sources: - model: argilla/notus-7b-v1 layer_range: [8, 20] - sources: - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet layer_range: [20, 32] merge_method: passthrough tokenizer_source: union dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "powermove72/Shark-1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
MarOsz/whisper-small-polish-peft-simple
MarOsz
2024-06-17T23:20:07Z
8
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-small", "base_model:adapter:openai/whisper-small", "region:us" ]
null
2024-06-16T17:22:44Z
--- library_name: peft base_model: openai/whisper-small --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.2.dev0
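Since the card above is still an unfilled template, here is a purely illustrative sketch (not the author's documented usage) of attaching this PEFT adapter to its `openai/whisper-small` base for transcription. The silent stand-in audio is only a placeholder; real Polish speech at 16 kHz would go there.

```python
# Illustrative sketch only: loading a PEFT adapter on top of its Whisper base.
# The adapter repo id comes from this card's metadata; everything else is assumed.
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "MarOsz/whisper-small-polish-peft-simple")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Stand-in input: one second of silence at 16 kHz; replace with real audio.
audio = np.zeros(16_000, dtype=np.float32)
features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features

with torch.no_grad():
    ids = model.generate(input_features=features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```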
mnemic/ElementMix-PDXL-LoRA
mnemic
2024-06-17T23:17:17Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:28:59Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: ElementsMix, wind, water, earth, fire --- # ElementMix - PDXL - LoRA [CivitAI Page](https://civitai.com/models/493769) ## Trigger Words ```ElementsMix, wind, water, earth, fire``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/EkhCE0BtTyIQxnjgs1zFF.jpeg) A Mixture of Elements model. The sum turned out to be better than the parts!
mnemic/CakeStyle-PDXL-LoRA
mnemic
2024-06-17T23:16:57Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:22:19Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: CakeStyle --- # CakeStyle - PDXL - LoRA [CivitAI Page](https://civitai.com/models/398363) ## Trigger Words ```CakeStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/Qoq02ze3Ain6TqSjUCb6P.jpeg) Turn anything into a cake!
tomg-group-umd/GenQA-llama-3
tomg-group-umd
2024-06-17T23:15:41Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T23:08:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mnemic/WrongHoleXL-SDXL-LoRA
mnemic
2024-06-17T23:15:27Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:19:49Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: WrongHole --- # WrongHoleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349150) ## Trigger Words ```WrongHole``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/w1Ln9hAdzBJFyibdX91M9.jpeg) What if you had the power to add a hole to anything?
mnemic/WhiteboxStyleXL-SDXL-LoRA
mnemic
2024-06-17T23:15:21Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:19:19Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: WhiteboxStyle --- # WhiteboxStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347185) ## Trigger Words ```WhiteboxStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/-kaEMceOv3YksEMezuIA9.jpeg) A level design support model.
mnemic/SemlaStyleXL-SDXL-LoRA
mnemic
2024-06-17T23:15:05Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:16:23Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: SemlaStyle --- # SemlaStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/337973) ## Trigger Words ```SemlaStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/v5dHJVablEXrnCQayrm0R.jpeg) Everything is better in semla form.
mnemic/ScienceDNAStyleXL-SDXL-LoRA
mnemic
2024-06-17T23:14:58Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:15:58Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: ScienceDNAStyle --- # ScienceDNAStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/153507) ## Trigger Words ```ScienceDNAStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/mPwskterFC-VEn_oG_PK9.jpeg) How are things made? With science!
mnemic/HornyfierXL-SDXL-LoRA
mnemic
2024-06-17T23:14:49Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:11:02Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: Hornyfier --- # HornyfierXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349174) ## Trigger Words ```Hornyfier``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/y6DYjPis_eW5xEgJXC2De.jpeg) Adds horns to anything. I mean anything, I dare you.
mnemic/DavyJonesLockerStyleXL-SDXL-LoRA
mnemic
2024-06-17T23:14:10Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:02:24Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: DavyJonesLockerStyle --- # DavyJonesLockerStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/220258) ## Trigger Words ```DavyJonesLockerStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/QtspT8GgQIgwjWjxgN041.jpeg) Adds a bit of underwater musky smell to all your images.
mnemic/dAIversityLoRASDXL-PhotoSemiReal-SDXL-LoRA
mnemic
2024-06-17T23:13:52Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:01:04Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: --- # dAIversityLoRASDXL-PhotoSemiReal - SDXL - LoRA [CivitAI Page](https://civitai.com/models/477136) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/tyURqDauyVNAhF18op-Lm.jpeg) An experimental detailer LoRA. It currently adds a bit too much style.
mnemic/ChocolateWetStyleXL-SDXL-LoRA
mnemic
2024-06-17T23:12:30Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:57:16Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: ChocolateWetStyle --- # ChocolateWetStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/337992) ## Trigger Words ```ChocolateWetStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/OJAWj_CJfJCpeWIdsOITC.jpeg) Put chocolate on almost anything.
mnemic/CheeseOnTopStyleXL-SDXL-LoRA
mnemic
2024-06-17T23:12:21Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:56:45Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: CheeseOnTopStyle --- # CheeseOnTopStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347714) ## Trigger Words ```CheeseOnTopStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/ZOvd032o2P_jcRH5ifMYN.jpeg) Puts color-prompted melted goop on things.
mnemic/CakeStyleXL-SDXL-LoRA
mnemic
2024-06-17T23:12:16Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:55:00Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: CakeStyle --- # CakeStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347704) ## Trigger Words ```CakeStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/K7fO6eOgHcmmdR8z76PhZ.jpeg) Turn anything into a cake! Works great with the SDXL base model!
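For illustration, here is a minimal diffusers sketch of using an SDXL LoRA like this one with its trigger word. The `weight_name` filename and the example prompt are assumptions, not something stated on the card; adjust the filename to the actual `.safetensors` file in the repo.

```python
# Hypothetical usage sketch for an SDXL LoRA with a trigger word; the
# weight_name below is an assumed filename, not taken from this card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("mnemic/CakeStyleXL-SDXL-LoRA", weight_name="CakeStyleXL.safetensors")

image = pipe(
    "CakeStyle, a small red sports car",  # trigger word first, then the subject
    num_inference_steps=30,
).images[0]
image.save("cake_car.png")
```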
mnemic/TransformersStyle-SD1.5-LoRA
mnemic
2024-06-17T23:09:57Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:51:26Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: TransformersStyle --- # TransformersStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/216460) ## Trigger Words ```TransformersStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/6m3JVr7axdV1ij_BQw6Ue.jpeg) Transform into a transformer using transformers!
mnemic/SwedishDesserts-SD1.5-LoRA
mnemic
2024-06-17T23:09:49Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:51:02Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: ChocolateBall, SaffronBun, ApplePie, CinnamonRoll, DaimCake, MarengueCake, RiceAlaMalta, StrawberryCake, RosehipSoup, Butterscotch, PrincessCake, Spettekaka, CheeseCake, RhubarbPie --- # SwedishDesserts - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/182385) ## Trigger Words ```ChocolateBall, SaffronBun, ApplePie, CinnamonRoll, DaimCake, MarengueCake, RiceAlaMalta, StrawberryCake, RosehipSoup, Butterscotch, PrincessCake, Spettekaka, CheeseCake, RhubarbPie``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/utE2JJgIQPHja9O9nroRj.jpeg) Enjoy some Swedish desserts.
mnemic/ScienceDNAStyle-SD1.5-LoRA
mnemic
2024-06-17T23:09:14Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:49:27Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: ScienceDNAStyle --- # ScienceDNAStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/153507) ## Trigger Words ```ScienceDNAStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/_5gx6z27cl5vv0zUEK_Zv.jpeg) How are things made? With science!
mnemic/PeachFuzz-SD1.5-LoRA
mnemic
2024-06-17T23:07:46Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:48:20Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: --- # PeachFuzz - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/80920) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/_z2skQ_NHjMUSCg5JUSsA.jpeg) This LoRA is meant to enhance peach fuzz (vellus hair) on the body.
mnemic/HalloweenGlowStyle-SD1.5-LoRA
mnemic
2024-06-17T23:05:00Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:46:14Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: HalloweenGlowStyle --- # HalloweenGlowStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/174055) ## Trigger Words ```HalloweenGlowStyle``` ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/gvDu0pjyUHkcF6Wdy1gEr.jpeg) A glowing Halloween style.
mnemic/GalacticEmpireStyle-SD1.5-LoRA
mnemic
2024-06-17T23:02:59Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:44:42Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: GalacticEmpireStyle --- # GalacticEmpireStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/) ## Trigger Words ```GalacticEmpireStyle``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/XTbk9HU8ce8XwRYxEyaC3.png) Use this LoRA to find those rebel scum!
mnemic/FluffyStyle-SD1.5-LoRA
mnemic
2024-06-17T23:02:30Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:44:10Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: FluffyStyle --- # FluffyStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/135871) ## Trigger Words ```FluffyStyle``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/3rTSEjwN336W3RzjObnsx.png) Fluffy, furry, fuzzy, soft and cuddly things!
5thCinematic/personalized-subscription-cuts
5thCinematic
2024-06-17T22:58:57Z
0
0
null
[ "feature-extraction", "en", "dataset:ruslanmv/ai-medical-chatbot", "license:bigscience-openrail-m", "region:us" ]
feature-extraction
2024-06-17T21:52:17Z
--- license: bigscience-openrail-m datasets: - ruslanmv/ai-medical-chatbot language: - en metrics: - accuracy pipeline_tag: feature-extraction ---
roeybc/bert-base-uncased-finetuned-swag
roeybc
2024-06-17T22:57:22Z
4
0
transformers
[ "transformers", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2024-06-17T22:20:11Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0054 - Accuracy: 0.7890 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7657 | 1.0 | 4597 | 0.6017 | 0.7661 | | 0.3834 | 2.0 | 9194 | 0.6371 | 0.7886 | | 0.1364 | 3.0 | 13791 | 1.0054 | 0.7890 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
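The card above covers training only, so here is a small, hypothetical inference sketch for a SWAG-style multiple-choice model: the same context is paired with each candidate ending and the model scores all pairs jointly. The context and endings below are made-up examples, not taken from the card.

```python
# Hypothetical inference sketch for a SWAG-style multiple-choice model.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "roeybc/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She picks up the guitar and"  # made-up example
endings = ["starts to play a song.", "eats it.", "drives away.", "goes swimming."]

# Pair the same context with every candidate ending, then add a batch dimension.
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print("Predicted ending:", endings[logits.argmax(-1).item()])
```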
CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF
CISCai
2024-06-17T22:57:09Z
659
1
null
[ "gguf", "code", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2401.06066", "base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-06-17T21:05:11Z
--- license: other license_name: deepseek-license license_link: https://github.com/deepseek-ai/DeepSeek-Coder-V2/raw/main/LICENSE-MODEL tags: - code language: - code base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct model_creator: DeepSeek AI model_name: DeepSeek-Coder-V2-Lite-Instruct model_type: deepseek2 datasets: - m-a-p/CodeFeedback-Filtered-Instruction quantized_by: CISC --- # DeepSeek-Coder-V2-Lite-Instruct - SOTA GGUF - Model creator: [DeepSeek AI](https://huggingface.co/deepseek-ai) - Original model: [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) <!-- description start --> ## Description This repo contains State Of The Art quantized GGUF format model files for [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct). Quantization was done with an importance matrix that was trained for ~250K tokens (64 batches of 4096 tokens) of answers from the [CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) dataset. Fill-in-Middle token metadata has been added, see [example](#simple-llama-cpp-python-example-fill-in-middle-code). NOTE: Due to some of the tensors in this model being oddly shaped, a considerable portion of the quantization fell back to IQ4_NL instead of the specified method, causing somewhat larger (and "smarter"; even IQ1_M is quite usable) model files than usual! <!-- description end --> <!-- prompt-template start --> ## Prompt template: DeepSeek v2 ``` User: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv3 files are compatible with llama.cpp from May 29th 2024 onwards, as of commit [fb76ec3](https://github.com/ggerganov/llama.cpp/commit/fb76ec31a9914b7761c1727303ab30380fd4f05c). They are also compatible with many third party UIs and libraries provided they are built using a recent llama.cpp. 
## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw) * GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw * GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw * GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw * GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw * GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw * GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw * GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw * GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw * GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw * GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw * GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf) | IQ1_S | 1 | 4.5 GB| 5.5 GB | smallest, significant quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf) | IQ1_M | 1 | 4.7 GB| 5.7 GB | very small, significant quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 2 | 5.1 GB| 6.1 GB | very small, high quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf) | IQ2_XS | 2 | 5.4 GB| 6.4 GB | very small, high quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf) | IQ2_S | 2 | 5.4 GB| 6.4 GB | small, substantial quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf) | IQ2_M | 2 | 5.7 GB| 6.7 GB | small, greater quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 3 | 6.3 GB| 7.3 GB | very small, high quality loss | | 
[DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf) | IQ3_XS | 3 | 6.5 GB| 7.5 GB | small, substantial quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf) | IQ3_S | 3 | 6.8 GB| 7.8 GB | small, greater quality loss | | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf) | IQ3_M | 3 | 6.9 GB| 7.9 GB | medium, balanced quality - recommended | | [DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf) | IQ4_NL | 4 | 8.1 GB| 9.1 GB | small, substantial quality loss | Generated importance matrix file: [DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat) **Note**: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [fb76ec3](https://github.com/ggerganov/llama.cpp/commit/fb76ec31a9914b7761c1727303ab30380fd4f05c) or later. ```shell ./llama-cli -ngl 28 -m DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf --color -c 131072 --temp 0 --repeat-penalty 1.1 -p "User: {prompt}\n\nAssistant:" ``` Change `-ngl 28` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 131072` to the desired sequence length. If you are low on V/RAM try quantizing the K-cache with `-ctk q8_0` or even `-ctk q4_0` for big memory savings (depending on context size). There is a similar option for V-cache (`-ctv`), however that requires Flash Attention [which is not working yet with this model](https://github.com/ggerganov/llama.cpp/issues/7343). For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) module. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/). #### First install the package Run one of the following commands, according to your system: ```shell # Prebuilt wheel with basic CPU support pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu # Prebuilt wheel with NVidia CUDA acceleration pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121 (or cu122 etc.) 
# Prebuilt wheel with Metal GPU acceleration pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal # Build base version with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # Or with Vulkan acceleration CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python # Or with Kompute acceleration CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python # Or with SYCL acceleration CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUDA=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Chat Completion API llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072) print(llm.create_chat_completion( repeat_penalty = 1.1, messages = [ { "role": "user", "content": "Pick a LeetCode challenge and solve it in Python." } ] )) ``` #### Simple llama-cpp-python example fill-in-middle code ```python from llama_cpp import Llama # Completion API prompt = "def add(" suffix = "\n return sum\n\n" llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072) output = llm.create_completion( temperature = 0.0, repeat_penalty = 1.0, prompt = prompt, suffix = suffix ) # Models sometimes repeat suffix in response, attempt to filter that response = output["choices"][0]["text"] response_stripped = response.rstrip() unwanted_response_suffix = suffix.rstrip() unwanted_response_length = len(unwanted_response_suffix) filtered = False if unwanted_response_suffix and response_stripped[-unwanted_response_length:] == unwanted_response_suffix: response = response_stripped[:-unwanted_response_length] filtered = True print(f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{prompt}\033[32m{response}\033[{'33' if filtered else '0'}m{suffix}\033[0m") ``` <!-- README_GGUF.md-how-to-run end --> <!-- original-model-card start --> <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img 
alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;"> <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;"> <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="#4-api-platform">API Platform</a> | <a href="#5-how-to-run-locally">How to Use</a> | <a href="#6-license">License</a> | </p> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf"><b>Paper Link</b>👁️</a> </p> # DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence ## 1. Introduction We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from DeepSeek-Coder-V2-Base with 6 trillion tokens sourced from a high-quality and multi-source corpus. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-Coder-V2-Base, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K. <p align="center"> <img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/performance.png?raw=true"> </p> In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found in the paper. ## 2. 
Model Downloads

We release DeepSeek-Coder-V2 to the public with 16B and 236B total parameters, built on the [DeepSeekMoE](https://arxiv.org/pdf/2401.06066) framework with only 2.4B and 21B active parameters respectively, in both base and instruct variants.

<div align="center">

| **Model** | **#Total Params** | **#Active Params** | **Context Length** | **Download** |
| :-----------------------------: | :---------------: | :----------------: | :----------------: | :----------------------------------------------------------: |
| DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) |
| DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) |
| DeepSeek-Coder-V2-Base | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base) |
| DeepSeek-Coder-V2-Instruct | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct) |

</div>

## 3. Chat Website

You can chat with DeepSeek-Coder-V2 on DeepSeek's official website: [coder.deepseek.com](https://coder.deepseek.com/sign_in)

## 4. API Platform

We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/). Sign up to receive millions of free tokens, or pay as you go at an unbeatable price.

<p align="center">
<img width="40%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/model_price.jpg?raw=true">
</p>

## 5. How to run locally

**Here, we provide some examples of how to use the DeepSeek-Coder-V2-Lite model. If you want to run DeepSeek-Coder-V2 in BF16 format for inference, 8x80GB GPUs are required.**

### Inference with Hugging Face's Transformers

You can directly employ [Hugging Face's Transformers](https://github.com/huggingface/transformers) for model inference.
#### Code Completion ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() input_text = "#write a quick sort algorithm" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` #### Code Insertion ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() input_text = """<|fim▁begin|>def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[0] left = [] right = [] <|fim▁hole|> if arr[i] < pivot: left.append(arr[i]) else: right.append(arr[i]) return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>""" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):]) ``` #### Chat Completion ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() messages=[ { 'role': 'user', 'content': "write a quick sort algorithm in python."} ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) # tokenizer.eos_token_id is the id of <|EOT|> token outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` The complete chat template can be found within `tokenizer_config.json` located in the huggingface model repository. An example of chat template is as belows: ```bash <|begin▁of▁sentence|>User: {user_message_1} Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2} Assistant: ``` You can also add an optional system message: ```bash <|begin▁of▁sentence|>{system_message} User: {user_message_1} Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2} Assistant: ``` ### Inference with vLLM (recommended) To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650. 
```python from transformers import AutoTokenizer from vllm import LLM, SamplingParams max_model_len, tp_size = 8192, 1 model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True) sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id]) messages_list = [ [{"role": "user", "content": "Who are you?"}], [{"role": "user", "content": "write a quick sort algorithm in python."}], [{"role": "user", "content": "Write a piece of quicksort code in C++."}], ] prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list] outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params) generated_text = [output.outputs[0].text for output in outputs] print(generated_text) ``` ## 6. License This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-CODE). The use of DeepSeek-Coder-V2 Base/Instruct models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL). DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use. ## 7. Contact If you have any questions, please raise an issue or contact us at [service@deepseek.com](service@deepseek.com).
mnemic/ChocolateWetStyle-SD1.5-LoRA
mnemic
2024-06-17T22:55:08Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:38:43Z
---
license: gpl-3.0
base_model: runwayml/stable-diffusion-v1-5
trained_words: ChocolateWetStyle
---

# ChocolateWetStyle - SD1.5 - LoRA

[CivitAI Page](https://civitai.com/models/67132)

## Trigger Words

```ChocolateWetStyle```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/GAlb8Zax6yzsAHZbX9B1w.png)

Put chocolate on almost anything.
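The card only documents the trigger word, so below is a minimal, hypothetical sketch of applying this LoRA on top of its SD1.5 base with diffusers; the `weight_name` is an assumed filename, so check the repository's actual file list before running.

```python
from diffusers import StableDiffusionPipeline
import torch

# Load the SD1.5 base model this LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository (weight_name is an assumption)
pipe.load_lora_weights(
    "mnemic/ChocolateWetStyle-SD1.5-LoRA",
    weight_name="ChocolateWetStyle.safetensors",
)

# Include the trigger word in the prompt to activate the style
image = pipe(
    "ChocolateWetStyle, a sports car on a studio backdrop",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
image.save("chocolate_car.png")
```

Lowering the LoRA scale (here passed via `cross_attention_kwargs`) softens the chocolate effect if it starts to overwhelm the rest of the prompt.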
Soughing/Qwen_scratch_base-checkpoint-1000
Soughing
2024-06-17T22:53:08Z
150
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T22:46:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
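The usage section above is left as a placeholder; as a purely speculative starting point, the repository tags (transformers, qwen2, text-generation) suggest the checkpoint loads as a standard causal LM, roughly as sketched below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Soughing/Qwen_scratch_base-checkpoint-1000"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Plain text-generation call; the card does not document a chat template,
# so this treats the checkpoint as a base language model.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```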
amy011872/LawToken-0.5B-a2
amy011872
2024-06-17T22:52:51Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:Qwen/Qwen2-0.5B", "base_model:finetune:Qwen/Qwen2-0.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T16:20:20Z
---
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: LawToken-0.5B-a2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# LawToken-0.5B-a2

This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.14          | 0.14  | 10000 | 1.1605          |
| 1.0485        | 0.28  | 20000 | 1.0663          |
| 1.0592        | 0.42  | 30000 | 1.0069          |
| 0.9293        | 0.56  | 40000 | 0.9609          |
| 0.8503        | 0.71  | 50000 | 0.9210          |
| 0.9322        | 0.85  | 60000 | 0.8858          |
| 0.8238        | 0.99  | 70000 | 0.8634          |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.3.0a0+ebedce2
- Datasets 2.19.1
- Tokenizers 0.15.2
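The card documents training but not inference; a minimal sketch of querying the fine-tuned model with transformers is given below, assuming the tokenizer inherits the Qwen2 chat template from its base model (the prompt and generation settings are illustrative only).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "amy011872/LawToken-0.5B-a2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# SFT-trained with trl on the "generator" dataset; formatting the prompt with
# the (assumed) Qwen2 chat template keeps it close to the training setup.
messages = [{"role": "user", "content": "Summarize the key elements of a valid contract."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```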
mnemic/WhiteboxStyle-PDXL-LoRA
mnemic
2024-06-17T22:47:36Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:32:36Z
---
license: gpl-3.0
base_model: AstraliteHeart/pony-diffusion-v6
trained_words: WhiteboxStyle
---

# WhiteboxStyle - PDXL - LoRA

[CivitAI Page](https://civitai.com/models/402937)

## Trigger Words

```WhiteboxStyle```

![Model Preview](https://huggingface.co/mnemic/WhiteboxStyle-PDXL-LoRA/raw/main/WhiteboxStylePony.preview.png)

A level design support model.
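Pony Diffusion V6 XL LoRAs such as this one are usually applied to an SDXL-class pipeline; the sketch below is illustrative only, and both the local checkpoint path and the LoRA filename are placeholders rather than files documented by this card.

```python
from diffusers import StableDiffusionXLPipeline
import torch

# Pony Diffusion V6 XL is commonly distributed as a single-file SDXL checkpoint;
# the path below is a placeholder for wherever you saved it.
pipe = StableDiffusionXLPipeline.from_single_file(
    "./ponyDiffusionV6XL.safetensors", torch_dtype=torch.float16
).to("cuda")

# Attach this LoRA from the repository (weight_name is an assumption)
pipe.load_lora_weights(
    "mnemic/WhiteboxStyle-PDXL-LoRA",
    weight_name="WhiteboxStylePony.safetensors",
)

# Use the trigger word in the prompt
image = pipe(
    "WhiteboxStyle, greybox level layout of a warehouse interior",
    num_inference_steps=25,
).images[0]
image.save("whitebox_level.png")
```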
mnemic/TransformersStyle-PDXL-LoRA
mnemic
2024-06-17T22:47:32Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:32:05Z
---
license: gpl-3.0
base_model: AstraliteHeart/pony-diffusion-v6
trained_words: TransformersStyle
---

# TransformersStyle - PDXL - LoRA

[CivitAI Page](https://civitai.com/models/402931)

## Trigger Words

```TransformersStyle```

![Model Preview](https://huggingface.co/mnemic/TransformersStyle-PDXL-LoRA/raw/main/TransformersStylePony.preview.png)

Transform everything into a transformer using transformers!
mnemic/FluffyStyle-PDXL-LoRA
mnemic
2024-06-17T22:47:22Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:30:30Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: FluffyStyle --- # FluffyStyle - PDXL - LoRA [CivitAI Page](https://civitai.com/models/402870) ## Trigger Words ```FluffyStyle``` ![Model Preview](https://huggingface.co/mnemic/FluffyStyle-PDXL-LoRA/raw/main/FluffyStylePony.preview.png) Fluffy, furry, fuzzy soft and cuddly things!
mnemic/DavyJonesLockerStyle-PDXL-LoRA
mnemic
2024-06-17T22:46:59Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:27:30Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: DavyJonesLockerStyle --- # DavyJonesLockerStyle - PDXL - LoRA [CivitAI Page](https://civitai.com/models/401224) ## Trigger Words ```DavyJonesLockerStyle``` ![Model Preview](https://huggingface.co/mnemic/DavyJonesLockerStyle-PDXL-LoRA/raw/main/DavyJonesLockerStylePony.preview.png) Adds a bit of underwater musky smell to all your images.
mnemic/ChristmasWintery-PDXL-LoRA
mnemic
2024-06-17T22:46:46Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:25:38Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: ChristmasWintery --- # ChristmasWintery - PDXL - LoRA [CivitAI Page](https://civitai.com/models/400528) ## Trigger Words ```ChristmasWintery``` ![Model Preview](https://huggingface.co/mnemic/ChristmasWintery-PDXL-LoRA/raw/main/ChristmasWinteryPony.preview.png) Snowing Christmas style.
mnemic/ChocolateWetStyle-PDXL-LoRA
mnemic
2024-06-17T22:46:38Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:24:39Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: ChocolateWetStyle --- # ChocolateWetStyle - PDXL - LoRA [CivitAI Page](https://civitai.com/models/400474) ## Trigger Words ```ChocolateWetStyle``` ![Model Preview](https://huggingface.co/mnemic/ChocolateWetStyle-PDXL-LoRA/raw/main/ChocolateWetStylePony.preview.png) Put chocolate on almost anything.
mnemic/CheeseOnTopStyle-PDXL-LoRA
mnemic
2024-06-17T22:46:35Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:24:12Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: CheeseOnTopStyle --- # CheeseOnTopStyle - PDXL - LoRA [CivitAI Page](https://civitai.com/models/400451) ## Trigger Words ```CheeseOnTopStyle``` ![Model Preview](https://huggingface.co/mnemic/CheeseOnTopStyle-PDXL-LoRA/raw/main/CheeseOnTopStylePony.preview.png) Puts color-prompted melted goop on things.
mnemic/CarnageStyle-PDXL-LoRA
mnemic
2024-06-17T22:46:28Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:23:15Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: CarnageStyle --- # CarnageStyle - PDXL - LoRA [CivitAI Page](https://civitai.com/models/400382) ## Trigger Words ```CarnageStyle``` ![Model Preview](https://huggingface.co/mnemic/CarnageStyle-PDXL-LoRA/raw/main/CarnageStylePony.preview.png) Some kind of Carnage style. It's not as strong as the SD1.5 version.
mnemic/BatmanCore-PDXL-LoRA
mnemic
2024-06-17T22:46:13Z
0
0
null
[ "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:finetune:AstraliteHeart/pony-diffusion-v6", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:21:22Z
--- license: gpl-3.0 base_model: AstraliteHeart/pony-diffusion-v6 trained_words: BatmanCore --- # BatmanCore - PDXL - LoRA [CivitAI Page](https://civitai.com/models/398974) ## Trigger Words ```BatmanCore``` ![Model Preview](https://huggingface.co/mnemic/BatmanCore-PDXL-LoRA/raw/main/BatmanCorePony.preview.png) It's Batman! It puts spikes and wings on things and black armor on people.
mnemic/WaffleStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:45:55Z
0
1
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:18:49Z
---
license: gpl-3.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
trained_words: WaffleStyle
---

# WaffleStyleXL - SDXL - LoRA

[CivitAI Page](https://civitai.com/models/347152)

## Trigger Words

```WaffleStyle```

![Model Preview](https://huggingface.co/mnemic/WaffleStyleXL-SDXL-LoRA/raw/main/WaffleStyleXL.preview.png)

Adds a lot of square grids to things.
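Unlike the Pony Diffusion LoRAs above, this one targets the stock SDXL base, so a minimal illustrative sketch with diffusers looks like the following; the LoRA filename is again an assumption to verify against the repository.

```python
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Register the LoRA as a named adapter so its strength can be tuned per prompt
pipe.load_lora_weights(
    "mnemic/WaffleStyleXL-SDXL-LoRA",
    weight_name="WaffleStyleXL.safetensors",  # assumed filename
    adapter_name="waffle",
)
pipe.set_adapters(["waffle"], adapter_weights=[0.9])

image = pipe(
    "WaffleStyle, a city skyline at sunset",
    num_inference_steps=30,
).images[0]
image.save("waffle_city.png")
```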
mnemic/TransformersStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:45:51Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:18:22Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: TransformersStyle --- # TransformersStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349158) ## Trigger Words ```TransformersStyle``` ![Model Preview](https://huggingface.co/mnemic/TransformersStyleXL-SDXL-LoRA/raw/main/TransformersStyleXL.preview.png) Transform things into a transformer using transformers!
mnemic/SwedishDessertsXL-SDXL-LoRA
mnemic
2024-06-17T22:45:47Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:17:54Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: ApplePie, Butterscotch, CheeseCake, ChocolateBall, CinnamonRoll, DaimCake, MarengueCake, PrincessCake, RhubarbPie, RiceAlaMalta, RosehipSoup, SaffronBun, Spettekaka, StrawberryCake --- # SwedishDessertsXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349098) ## Trigger Words ```ApplePie, Butterscotch, CheeseCake, ChocolateBall, CinnamonRoll, DaimCake, MarengueCake, PrincessCake, RhubarbPie, RiceAlaMalta, RosehipSoup, SaffronBun, Spettekaka, StrawberryCake``` ![Model Preview](https://huggingface.co/mnemic/SwedishDessertsXL-SDXL-LoRA/raw/main/SwedishDessertsXL.preview.png) Enjoy some Swedish desserts.
mnemic/SpyWorld50sXL-SDXL-LoRA
mnemic
2024-06-17T22:45:40Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:16:54Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: SpyWorld50s --- # SpyWorld50sXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347788) ## Trigger Words ```SpyWorld50s``` ![Model Preview](https://huggingface.co/mnemic/SpyWorld50sXL-SDXL-LoRA/raw/main/SpyWorld50sXL.preview.png) Is that a camera or are you just happy to see me?
mnemic/P14n03l3g4nt3b0n3XL-SDXL-LoRA
mnemic
2024-06-17T22:45:28Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:15:23Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: P14n03l3g4nt3b0n3 --- # P14n03l3g4nt3b0n3XL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349112) ## Trigger Words ```P14n03l3g4nt3b0n3``` ![Model Preview](https://huggingface.co/mnemic/P14n03l3g4nt3b0n3XL-SDXL-LoRA/raw/main/P14n03l3g4nt3b0n3XL.preview.png) A beautiful ebony and ivory style.
mnemic/NESStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:45:19Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:14:17Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: NESStyle --- # NESStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347769) ## Trigger Words ```NESStyle``` ![Model Preview](https://huggingface.co/mnemic/NESStyleXL-SDXL-LoRA/raw/main/NESStyleXL.preview.png) What if everything was as beautiful as a NES?
mnemic/MinionStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:45:10Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:13:14Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: MinionStyle --- # MinionStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347778) ## Trigger Words ```MinionStyle``` ![Model Preview](https://huggingface.co/mnemic/MinionStyleXL-SDXL-LoRA/raw/main/MinionStyleXL.preview.png) Anything can be a minion!