| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 06:26:56 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 538 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 06:26:41 |
| card | string | lengths 11 to 1.01M |
davideaguglia/ppo-Huggy
davideaguglia
2024-05-04T14:39:21Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-05-04T08:11:48Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: davideaguglia/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
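As an alternative to the browser flow above, the snippet below is a minimal sketch (not part of the original card) that pulls the trained policy files, including the `*.onnx` file, from this repository with `huggingface_hub` so they can be loaded into a local Unity / ML-Agents project:

```python
# Minimal sketch: download the trained ppo-Huggy files (including the .onnx policy)
# from the Hub for use in a local ML-Agents / Unity project.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="davideaguglia/ppo-Huggy",
    local_dir="./ppo-Huggy",
)
print(f"Model files downloaded to: {local_dir}")
```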
Lewdiculous/Chaos_RP_l3_8B-GGUF-IQ-Imatrix
Lewdiculous
2024-05-04T14:39:16Z
215
19
null
[ "gguf", "roleplay", "llama3", "sillytavern", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-22T19:00:39Z
--- tags: - roleplay - llama3 - sillytavern - gguf license: apache-2.0 --- > [!TIP] > **Support:** <br> > My upload speeds have been cooked and unstable lately. <br> > Realistically I'd need to move to get a better provider. <br> > If you **want** and you are able to... <br> > [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br> > I apologize for disrupting your experience. **This is a Llama-3 land now, cowboys!** "A chaotic force beckons for you, will you heed her call?" GGUF-IQ-Imatrix quants for [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B). > [!IMPORTANT] > **Updated!** > These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br> > Use **KoboldCpp version 1.64** or higher. > [!NOTE] > **Quant:** <br> > For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for up to 12288 context sizes. > [!WARNING] > Recommended presets [here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here](https://huggingface.co/Virt-io/SillyTavern-Presets). <br> > Use the latest version of KoboldCpp. **Use the provided presets.** <br> > This is all still highly experimental, modified configs were used to avoid the tokenizer issues, let the authors know how it performs for you, feedback is more important than ever now. **Original model information:** # Chaos RP ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/u5p9kdbXT2QQA3iMU0vF1.png) A chaotic force beckons for you, will you heed her call? Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort. Enjoy!
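For readers who prefer llama.cpp bindings over KoboldCpp, the sketch below shows one way to load the recommended Q4_K_M imatrix quant with `llama-cpp-python`; the exact `.gguf` filename is an assumption, so check the repository's file list before running:

```python
# Hedged sketch: load a Q4_K_M imatrix quant of Chaos_RP_l3_8B with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Lewdiculous/Chaos_RP_l3_8B-GGUF-IQ-Imatrix",
    filename="Chaos_RP_l3_8B-Q4_K_M-imat.gguf",  # assumed filename; verify in the repo
)
llm = Llama(model_path=gguf_path, n_ctx=12288)  # context size suggested in the card
out = llm("A chaotic force beckons for you, will you heed her call?", max_tokens=64)
print(out["choices"][0]["text"])
```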
himum/sn6_0s
himum
2024-05-04T14:38:13Z
4
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-22T08:05:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Phummarin/Phummarin
Phummarin
2024-05-04T14:37:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-04T14:37:20Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Phummarin - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SyntheticVoicess/Adam_11labs
SyntheticVoicess
2024-05-04T14:32:29Z
9
9
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
2023-10-05T21:10:18Z
If you found this helpful, please credit my YouTube channel: https://www.youtube.com/channel/UCg_TbkAQVs_qvimShR08IYw Support me: https://www.buymeacoffee.com/SyntheticVoices Discord: https://discord.gg/PdYFs7qmSW --- How to install Tortoise? I recommend this fella: https://www.youtube.com/watch?v=6sTsqSQYIzs RVC install: https://www.youtube.com/results?search_query=install+rvc (pick the best result). Tip: don't try to input too much text at a time; about 10 seconds' worth of text/audio is best, otherwise you will get a drop in quality and other artifacts. What the voice sounds like: https://www.youtube.com/shorts/pM622IukBYU XTTS model added: "best_model.pth" and "config.json" are needed for XTTS.
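The card names the exact files XTTS expects, so a small sketch (assuming `huggingface_hub` is installed) for fetching them looks like this; wiring them into your Tortoise/XTTS setup follows the tutorials linked above:

```python
# Sketch: download the two files the card says XTTS needs from this repository.
from huggingface_hub import hf_hub_download

repo_id = "SyntheticVoicess/Adam_11labs"
model_path = hf_hub_download(repo_id=repo_id, filename="best_model.pth")
config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
print(model_path, config_path)
```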
BUHK-VCG/MotionPrediction
BUHK-VCG
2024-05-04T14:32:15Z
1
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-04-24T08:20:31Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ratchaphon666/lora_model
ratchaphon666
2024-05-04T14:31:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-04T14:31:19Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** ratchaphon666 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
himum/sn6_3s
himum
2024-05-04T14:31:11Z
284
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-23T00:37:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix
Lewdiculous
2024-05-04T14:30:30Z
300
27
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:46:59Z
--- license: apache-2.0 --- > [!TIP] > My upload speeds have been cooked and unstable lately. <br> > Realistically I'd need to move to get a better provider. <br> > If you **want** and you are able to, you can [**support various endeavors here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br> > I apologize for disrupting your experience. # #llama-3 #experimental #work-in-progress GGUF-IQ-Imatrix quants for @jeiku's [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B). <br> Give them some love! > [!IMPORTANT] > **Updated!** > These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br> > Use **KoboldCpp version 1.64** or higher. > [!NOTE] > **Well...!** <br> > Turns out it was not just a hallucination and this model actually is pretty cool so **give it a chance!** <br> > For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for up to 12288 context sizes. > [!WARNING] > **Use the provided presets.** <br> > Compatible SillyTavern presets [here (simple)](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here (Virt's roleplay)](https://huggingface.co/Virt-io/SillyTavern-Presets). > Use the latest version of KoboldCpp. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/N_1D87adbMuMlSIQ5rI3_.png)
dappyx/QazDistilbertFast-tokenizer
dappyx
2024-05-04T14:29:19Z
0
1
transformers
[ "transformers", "kk", "dataset:issai/kazqad", "endpoints_compatible", "region:us" ]
null
2024-05-04T13:34:06Z
--- library_name: transformers datasets: - issai/kazqad language: - kk --- # Model Card for Model ID DistilBERT tokenizer trained on KazQAD ## Model Details ### Model Description - **Model type:** DistilBERT - **Language(s) (NLP):** Kazakh ## Training Details ### Training Data https://github.com/IS2AI/KazQAD/ ## Environmental Impact - **Hardware Type:** TPUv2 - **Hours used:** Less than a minute - **Cloud Provider:** Google Colab
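A minimal usage sketch, assuming the tokenizer files in this repository load through the standard `transformers` Auto classes:

```python
# Sketch: load the Kazakh DistilBERT tokenizer and tokenize a short sample.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dappyx/QazDistilbertFast-tokenizer")
print(tokenizer.tokenize("Қазақстан Республикасы"))  # sample Kazakh text
```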
RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf
RichardErkhov
2024-05-04T14:27:49Z
45
1
null
[ "gguf", "arxiv:2303.08774", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-04T12:03:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Faro-Yi-9B - GGUF - Model creator: https://huggingface.co/wenbopan/ - Original model: https://huggingface.co/wenbopan/Faro-Yi-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Faro-Yi-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q2_K.gguf) | Q2_K | 3.12GB | | [Faro-Yi-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ3_XS.gguf) | IQ3_XS | 3.46GB | | [Faro-Yi-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ3_S.gguf) | IQ3_S | 3.64GB | | [Faro-Yi-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K_S.gguf) | Q3_K_S | 3.63GB | | [Faro-Yi-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ3_M.gguf) | IQ3_M | 3.78GB | | [Faro-Yi-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K.gguf) | Q3_K | 4.03GB | | [Faro-Yi-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K_M.gguf) | Q3_K_M | 4.03GB | | [Faro-Yi-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K_L.gguf) | Q3_K_L | 4.37GB | | [Faro-Yi-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ4_XS.gguf) | IQ4_XS | 4.5GB | | [Faro-Yi-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_0.gguf) | Q4_0 | 4.69GB | | [Faro-Yi-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ4_NL.gguf) | IQ4_NL | 4.73GB | | [Faro-Yi-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_K_S.gguf) | Q4_K_S | 4.72GB | | [Faro-Yi-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_K.gguf) | Q4_K | 4.96GB | | [Faro-Yi-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_K_M.gguf) | Q4_K_M | 4.96GB | | [Faro-Yi-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_1.gguf) | Q4_1 | 5.19GB | | [Faro-Yi-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_0.gguf) | Q5_0 | 5.69GB | | [Faro-Yi-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_K_S.gguf) | Q5_K_S | 5.69GB | | [Faro-Yi-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_K.gguf) | Q5_K | 5.83GB | | [Faro-Yi-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_K_M.gguf) | Q5_K_M | 5.83GB | | [Faro-Yi-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_1.gguf) | Q5_1 | 6.19GB | | [Faro-Yi-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q6_K.gguf) | Q6_K | 6.75GB | Original model description: --- license: mit datasets: - wenbopan/Fusang-v1 - wenbopan/OpenOrca-zh-20k language: - zh - en --- 
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp) **The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.** # Faro-Yi-9B Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1. Just like Yi-9B-200K, Faro-Yi-9B supports up to 200K context length. ## How to Use Faro-Yi-9B uses the chatml template and performs well in both short and long contexts. For longer inputs under **24GB of VRAM**, I recommend to use vLLM to have a max prompt of 32K. Setting `kv_cache_dtype="fp8_e5m2"` allows for 48K input length. 4bit-AWQ quantization on top of that can boost input length to 160K, albeit with some performance impact. Adjust `max_model_len` arg in vLLM or `config.json` to avoid OOM. ```python import io import requests from PyPDF2 import PdfReader from vllm import LLM, SamplingParams llm = LLM(model="wenbopan/Faro-Yi-9B", kv_cache_dtype="fp8_e5m2", max_model_len=100000) pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content) document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages) # 100 pages question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?" messages = [ {"role": "user", "content": question} ] # 83K tokens prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False) output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500)) print(output[0].outputs[0].text) # Yi-9B-200K: 175B. GPT-4 has 175B \nparameters. How many models were combined to create GPT-4? Answer: 6. ... # Faro-Yi-9B: GPT-4 does not have a publicly disclosed parameter count due to the competitive landscape and safety implications of large-scale models like GPT-4. ... ``` <details> <summary>Or With Transformers</summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Yi-9B', device_map="cuda") tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Yi-9B') messages = [ {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."}, {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."} ] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device) generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5) response = tokenizer.decode(generated_ids[0], skip_special_tokens=True) # Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. ... ``` </details> ## Performance Faro-Yi-9B enhances its ability compared to Yi-9B-200K in most dimensions, especially in long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among all open-sourced models at around 9B parameters. 
<details> <summary>Benchmark Results</summary> ### Fact-based Evaluation (Open LLM Leaderboard) | **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** | | -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- | | **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 | | **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 | ### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench)) | **Name** | **Average_zh** | **Average_en** | **Code Completion** | |----------------|----------------|----------------|---------------------| | **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 | | **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 | <details> <summary>Score breakdown</summary> | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** | |----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------| | **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 | | **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** | </details> ### Performance on Preference (MT-Bench) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/M0Kc64sIsbNyCCvrRk1Lv.png) ### Bilingual Ability (CMMLU & MMLU) | **Name** | MMLU | **CMMLU** | | -------------- | --------- | --------- | | **Yi-9B-200K** | 65.73 | 71.97 | | **Faro-Yi-9B** | **68.80** | **73.28** | </details>
yevhenkost/ArgumentMining-EN-ARI-AIF-ALBERT
yevhenkost
2024-05-04T14:26:36Z
98
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "argument mining", "en", "dataset:US2016", "dataset:QT30", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-04T14:15:15Z
--- license: cc-by-nc-sa-4.0 language: - en tags: - argument mining datasets: - US2016 - QT30 metrics: - macro-f1 --- ## ALBERT-based model for Argument Relation Identification (ARI) Argument Mining model trained with English (EN) data for the Argument Relation Identification (ARI) task using the US2016 and the QT30 corpora. This is a fine-tuned [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) model, inspired by the paper "Transformer-Based Models for Automatic Identification of Argument Relations: A Cross-Domain Evaluation". <br> This model was trained on the full dataset: train and test merged. ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification classes_decoder = { 0: "Inference", 1: "Conflict", 2: "Rephrase", 3: "No-Relation" } model = AutoModelForSequenceClassification.from_pretrained("yevhenkost/ArgumentMining-EN-ARI-AIF-ALBERT") tokenizer = AutoTokenizer.from_pretrained("yevhenkost/ArgumentMining-EN-ARI-AIF-ALBERT") text_one, text_two = "The water is wet", "The sun is really hot" model_inputs = tokenizer(text_one, text_two, return_tensors="pt") # regular SequenceClassifierOutput model_output = model(**model_inputs) ``` ## Metrics ``` precision recall f1-score support 0 0.51 0.59 0.55 833 1 0.46 0.28 0.35 200 2 0.51 0.30 0.38 156 3 0.82 0.82 0.82 2209 accuracy 0.71 3398 macro avg 0.58 0.50 0.53 3398 weighted avg 0.71 0.71 0.71 3398 ``` These results are for a model variant trained only on the train split and evaluated on the test split. Cite: ``` @article{ruiz2021transformer, author = {R. Ruiz-Dolz and J. Alemany and S. Barbera and A. Garcia-Fornes}, journal = {IEEE Intelligent Systems}, title = {Transformer-Based Models for Automatic Identification of Argument Relations: A Cross-Domain Evaluation}, year = {2021}, volume = {36}, number = {06}, issn = {1941-1294}, pages = {62-70}, doi = {10.1109/MIS.2021.3073993}, publisher = {IEEE Computer Society} } ```
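The card's usage example stops at the raw `SequenceClassifierOutput`; the short continuation below (assuming the variables from that example are still in scope) maps the logits to one of the four relation labels via `classes_decoder`:

```python
# Continuation of the usage example above: decode the predicted relation label.
import torch

predicted_class = torch.argmax(model_output.logits, dim=-1).item()
print(classes_decoder[predicted_class])  # e.g. "No-Relation"
```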
juliuserictuliao/whisper-base-finetuned-gtzan
juliuserictuliao
2024-05-04T14:26:04Z
100
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-04T13:59:07Z
--- license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-base-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.89 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-finetuned-gtzan This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.7883 - Accuracy: 0.89 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2085 | 1.0 | 113 | 1.1946 | 0.63 | | 0.7358 | 2.0 | 226 | 0.7745 | 0.72 | | 0.5596 | 3.0 | 339 | 0.4850 | 0.88 | | 0.1965 | 4.0 | 452 | 0.6614 | 0.81 | | 0.013 | 5.0 | 565 | 0.7528 | 0.86 | | 0.1476 | 6.0 | 678 | 0.5289 | 0.9 | | 0.0542 | 7.0 | 791 | 0.7080 | 0.88 | | 0.0018 | 8.0 | 904 | 0.7699 | 0.87 | | 0.0016 | 9.0 | 1017 | 0.8014 | 0.88 | | 0.0015 | 10.0 | 1130 | 0.7883 | 0.89 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
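A minimal inference sketch, assuming the checkpoint works with the standard `audio-classification` pipeline; the audio path is a placeholder:

```python
# Sketch: classify the genre of a local music clip with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="juliuserictuliao/whisper-base-finetuned-gtzan",
)
print(classifier("example_clip.wav"))  # placeholder path to a local audio file
```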
Naliya/lora_model
Naliya
2024-05-04T14:23:51Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-04T14:23:26Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Naliya - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
asiansoul/AIA-Llama-3-MAAL-Ko-8B-32k-GGUF
asiansoul
2024-05-04T14:22:41Z
19
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Danielbrdz/Barcenas-Llama3-8b-ORPO", "base_model:merge:Danielbrdz/Barcenas-Llama3-8b-ORPO", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "base_model:merge:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:merge:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:merge:NousResearch/Meta-Llama-3-8B", "base_model:asiansoul/Llama-3-Open-Ko-Linear-8B", "base_model:merge:asiansoul/Llama-3-Open-Ko-Linear-8B", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "base_model:merge:cognitivecomputations/dolphin-2.9-llama3-8b", "base_model:maum-ai/Llama-3-MAAL-8B-Instruct-v0.1", "base_model:merge:maum-ai/Llama-3-MAAL-8B-Instruct-v0.1", "endpoints_compatible", "region:us" ]
null
2024-05-04T07:07:44Z
--- base_model: - NousResearch/Hermes-2-Pro-Llama-3-8B - cognitivecomputations/dolphin-2.9-llama3-8b - Danielbrdz/Barcenas-Llama3-8b-ORPO - NousResearch/Meta-Llama-3-8B - maum-ai/Llama-3-MAAL-8B-Instruct-v0.1 - asiansoul/Llama-3-Open-Ko-Linear-8B - MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 library_name: transformers tags: - mergekit - merge --- # <span style="color:blue;">AIA-Llama-3-MAAL-Ko-8B</span> [<img src="https://i.ibb.co/TmGjFkj/llm-v1.png" alt="llm-v1" width="400"/>](https://ibb.co/cD9f71f) I'm not going to say that my merge style one is the best model ever made. I'm not going to tell you that you'll enjoy chatting with my style merge model. All I want to say is thank you for taking time out of your day to visit today. <span style="color:red;font-weight:bold;"> Without users like you, my merge model would be meaningless.</span> <span style="color:navy;font-weight:bold;">Let's go on a fun trip together that we've never been on before to help each other.</span> Isn't it boring to just do LLM? <span style="color:purple;font-weight:bold;"> Soon I will open a very cool Streamlit base application based on the model I merged because i am an application engineer. Please wait until then.</span> I haven't tested this merge model in depth yet. I'm going to post it here and test it out ^^ ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base. ### Models Merged The following models were included in the merge: * [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) * [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) * [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO) * [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1) * [asiansoul/Llama-3-Open-Ko-Linear-8B](https://huggingface.co/asiansoul/Llama-3-Open-Ko-Linear-8B) * [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: NousResearch/Meta-Llama-3-8B # Base model providing a general foundation without specific parameters - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1 parameters: density: 0.60 weight: 0.4 - model: asiansoul/Llama-3-Open-Ko-Linear-8B parameters: density: 0.55 weight: 0.25 - model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 parameters: density: 0.55 weight: 0.15 - model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 0.55 weight: 0.05 - model: Danielbrdz/Barcenas-Llama3-8b-ORPO parameters: density: 0.55 weight: 0.125 - model: NousResearch/Hermes-2-Pro-Llama-3-8B parameters: density: 0.55 weight: 0.125 merge_method: dare_ties base_model: NousResearch/Meta-Llama-3-8B parameters: int8_mask: true dtype: bfloat16 ```
Pun-suke/lora_model
Pun-suke
2024-05-04T14:17:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-04T14:17:45Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Pun-suke - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
himanshubeniwal/opus-mt-en-ro-finetuned-ro-to-en-agreement
himanshubeniwal
2024-05-04T14:05:16Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-24T10:15:00Z
### Example 1 Romanian: Guvernul dumneavoastră are un acord cu Japonia. (English translation: Your government has an agreement with Japan.) ### Example 2 Romanian: Cum este acordul guvernului dumneavoastră. (English translation: How is your government's agreement?) Note: PLEASE DO NOT USE THIS MODEL. THIS IS ONLY FOR RESEARCH!
MY555/surya-fine-tuned-image-generator-V2
MY555
2024-05-04T14:02:16Z
2
1
diffusers
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-04T14:02:13Z
--- tags: - autotrain - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <enter your prompt here> license: openrail++ --- # AutoTrain SDXL LoRA DreamBooth - MY555/surya-fine-tuned-image-generator-V2 <Gallery /> ## Model description These are MY555/surya-fine-tuned-image-generator-V2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use <enter your prompt here> to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](MY555/surya-fine-tuned-image-generator-V2/tree/main) them in the Files & versions tab.
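A minimal sketch (assuming a CUDA GPU and the `diffusers` library) of attaching these LoRA adaption weights to the SDXL base model; replace the placeholder with the card's trigger prompt:

```python
# Sketch: load SDXL base, attach the LoRA weights from this repo, and generate an image.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MY555/surya-fine-tuned-image-generator-V2")
image = pipe("<enter your prompt here>").images[0]  # use the trigger prompt here
image.save("sample.png")
```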
supriyaupadhyaya/llama-3-8b-bnb-4bit-text-to-sql
supriyaupadhyaya
2024-05-04T14:00:35Z
2
0
peft
[ "peft", "pytorch", "safetensors", "llama", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-bnb-4bit", "4-bit", "bitsandbytes", "region:us" ]
null
2024-05-04T13:57:22Z
--- library_name: peft base_model: unsloth/llama-3-8b-bnb-4bit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_5.0bpw
bullerwins
2024-05-04T14:00:19Z
7
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "arxiv:2309.00071", "arxiv:2402.08268", "arxiv:2305.14233", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-05-04T13:54:15Z
--- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 --- <img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/> Quantized exl2 version of [Gradient AI Llama3 8B 262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k) for the upgraded weights (hence the v2) on the 4th of May of 2024 # Llama-3 8B Gradient Instruct 262k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. To learn more or collaborate on a custom model, drop us a message at contact@gradient.ai. This model extends LLama-3 8B's context length from 8k to > 160K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta. **Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644fac0ce1d7a97f3b653ab1/s9T8L-6Jh5fYH6Q_88r3g.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique - Progressive training on increasing context lengths similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262144 tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. **Quantized versions and GGUF** GGUF is available on on Crusoe's huggingface account. Check it out here: [crusoeai/Llama-3-8B-Instruct-262k-GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF) **Exl2 quantized versions** Exl2 is available on Bullerwins's huggingface account. Check it out here: [8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw) [6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_6.0bpw) [5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_5.0bpw) **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2]. **Progressive Training Details:** | Parameter | 65K | 262K | |-----------------------------|----------------|------------| | Initialize From | LLaMA-3-8B-Inst| 65K | | Sequence Length | 2^16 | 2^18 | | RoPE theta | 15.3 M | 207.1 M | | Batch Size (Tokens / Step) | 2.097 M | 4.192 M | | Steps | 30 | 24 | | Total Tokens | 63 M | 101 M | | Learning Rate | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | | GPU Type | NVIDIA L40S | NVIDIA L40S| **Evaluation Details:** ``` EVAL_MAX_CONTEXT_LENGTH=320200 EVAL_MIN_CONTEXT_LENGTH=100 EVAL_CONTEXT_INTERVAL=16000 EVAL_DEPTH_INTERVAL=0.2 EVAL_NUM_SAMPLES=2 EVAL_RND_NUMBER_DIGITS=8 HAYSTACK: EVAL_GENERATOR_TOKENS=925000 ``` Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals). 
## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [contact@gradient.ai](mailto:contact@gradient.ai) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext [4] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. 
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. 
<table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 
</td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. 
We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). 
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan 
Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
nermin-dubinovic/finetuned-llama-2-13b-bnb-4bit
nermin-dubinovic
2024-05-04T13:48:06Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-13b-bnb-4bit", "base_model:finetune:unsloth/llama-2-13b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-04T13:47:55Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-13b-bnb-4bit
---

# Uploaded model

- **Developed by:** nermin-dubinovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-13b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
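For quick testing, a minimal inference sketch with Unsloth's `FastLanguageModel` is shown below. This is an assumption-laden example (it presumes the repository contains the merged 4-bit weights and that the `unsloth` package is installed); the sequence length and generation settings are illustrative, not the values used for training.

```python
from unsloth import FastLanguageModel

# Load the 4-bit model and tokenizer (illustrative settings).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="nermin-dubinovic/finetuned-llama-2-13b-bnb-4bit",
    max_seq_length=2048,
    dtype=None,          # auto-detect
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Explain what a LoRA adapter is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```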
RichardErkhov/deepset_-_xlm-roberta-base-squad2-distilled-8bits
RichardErkhov
2024-05-04T13:47:38Z
48
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-generation", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-04T13:37:23Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) xlm-roberta-base-squad2-distilled - bnb 8bits - Model creator: https://huggingface.co/deepset/ - Original model: https://huggingface.co/deepset/xlm-roberta-base-squad2-distilled/ Original model description: --- language: multilingual datasets: - squad_v2 license: mit thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg tags: - exbert --- # deepset/xlm-roberta-base-squad2-distilled - haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model. ## Overview **Language model:** deepset/xlm-roberta-base-squad2-distilled **Language:** Multilingual **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` batch_size = 56 n_epochs = 4 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 3 distillation_loss_weight = 0.75 ``` ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled") # or reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled",tokenizer="deepset/xlm-roberta-base-squad2-distilled") ``` For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for [question answering], check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system) ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/xlm-roberta-base-squad2-distilled" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' 
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Performance

Evaluated on the SQuAD 2.0 dev set
```
"exact": 74.06721131980123%
"f1": 76.39919553344667%
```

## Authors

**Timo Möller:** timo.moeller@deepset.ai
**Julian Risch:** julian.risch@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Michel Bartels:** michel.bartels@deepset.ai

## About us

<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, etc.

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
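Note that this particular repository hosts a bitsandbytes 8-bit quantization of the model described above. As a hedged sketch (assuming `bitsandbytes` and `accelerate` are installed), quantizing the original deepset checkpoint to 8-bit on the fly with `transformers` approximates what the pre-quantized weights here provide:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, BitsAndBytesConfig

model_name = "deepset/xlm-roberta-base-squad2-distilled"

# On-the-fly 8-bit quantization of the original checkpoint.
model = AutoModelForQuestionAnswering.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```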
notKrisna/distilbert-10
notKrisna
2024-05-04T13:45:38Z
62
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-04T13:02:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: notKrisna/distilbert-10 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # notKrisna/distilbert-10 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0114 - Validation Loss: 0.7885 - Train Accuracy: 0.8144 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2700, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.0233 | 0.7613 | 0.8144 | 0 | | 0.0210 | 0.7611 | 0.8144 | 1 | | 0.0193 | 0.7918 | 0.8144 | 2 | | 0.0178 | 0.7694 | 0.7938 | 3 | | 0.0204 | 0.7999 | 0.8144 | 4 | | 0.0213 | 0.7653 | 0.8144 | 5 | | 0.0138 | 0.7865 | 0.8144 | 6 | | 0.0133 | 0.7804 | 0.8144 | 7 | | 0.0122 | 0.7702 | 0.8247 | 8 | | 0.0114 | 0.7885 | 0.8144 | 9 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
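As a hedged illustration of the optimizer settings listed above (the exact training script was not published), the schedule corresponds to roughly the following Keras setup:

```python
import tensorflow as tf

# PolynomialDecay from 2e-5 to 0 over 2700 steps, matching the hyperparameters above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=2700,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
# The optimizer would then be passed to model.compile(optimizer=optimizer, ...).
```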
ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_3
ShenaoZ
2024-05-04T13:43:07Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_2", "base_model:finetune:ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T12:37:14Z
--- license: apache-2.0 base_model: ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_2 tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_3 This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_2](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
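For readers who want to reproduce a comparable setup, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below; this is an illustrative assumption, not the actual alignment-handbook/TRL DPO recipe, and the datasets used here are not published:

```python
from transformers import TrainingArguments

# 8 GPUs x per-device batch 8 x gradient accumulation 4 = total train batch size 256.
args = TrainingArguments(
    output_dir="0.0001_withdpo_4iters_bs256_5102lr_misit_correct_iter_3",
    learning_rate=1e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# These arguments would then be passed to trl's DPOTrainer together with the
# policy model, a frozen reference model and a preference dataset.
```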
RichardErkhov/deepset_-_xlm-roberta-base-squad2-distilled-4bits
RichardErkhov
2024-05-04T13:37:16Z
48
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-generation", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-04T13:28:52Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) xlm-roberta-base-squad2-distilled - bnb 4bits - Model creator: https://huggingface.co/deepset/ - Original model: https://huggingface.co/deepset/xlm-roberta-base-squad2-distilled/ Original model description: --- language: multilingual datasets: - squad_v2 license: mit thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg tags: - exbert --- # deepset/xlm-roberta-base-squad2-distilled - haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model. ## Overview **Language model:** deepset/xlm-roberta-base-squad2-distilled **Language:** Multilingual **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` batch_size = 56 n_epochs = 4 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 3 distillation_loss_weight = 0.75 ``` ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled") # or reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled",tokenizer="deepset/xlm-roberta-base-squad2-distilled") ``` For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for [question answering], check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system) ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/xlm-roberta-base-squad2-distilled" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' 
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Performance

Evaluated on the SQuAD 2.0 dev set
```
"exact": 74.06721131980123%
"f1": 76.39919553344667%
```

## Authors

**Timo Möller:** timo.moeller@deepset.ai
**Julian Risch:** julian.risch@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Michel Bartels:** michel.bartels@deepset.ai

## About us

<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, etc.

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
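Note that this particular repository hosts a bitsandbytes 4-bit quantization of the model described above. As a hedged sketch (assuming `bitsandbytes` and `accelerate` are installed), quantizing the original deepset checkpoint to 4-bit on the fly with `transformers` approximates what the pre-quantized weights here provide:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, BitsAndBytesConfig

model_name = "deepset/xlm-roberta-base-squad2-distilled"

# On-the-fly 4-bit (NF4) quantization of the original checkpoint.
model = AutoModelForQuestionAnswering.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```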
OwOpeepeepoopoo/herewegoagaint1
OwOpeepeepoopoo
2024-05-04T13:37:10Z
85
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T13:36:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jingwenwang/Reinforce-CartPole-v1
Jingwenwang
2024-05-04T13:37:04Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-23T10:34:18Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
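For context, a compact REINFORCE (policy-gradient) training loop for CartPole-v1 looks roughly like the sketch below; this is a generic illustration in PyTorch and Gymnasium, not the author's exact implementation:

```python
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small policy network: 4 observations -> 2 action probabilities.
class Policy(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)

env = gym.make("CartPole-v1")
policy = Policy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted, normalized returns computed backwards through the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```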
ShenaoZ/0.0001_withdpo_3iters_bs256_551lr_misit_iter_3
ShenaoZ
2024-05-04T13:27:18Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_withdpo_3iters_bs256_551lr_misit_iter_2", "base_model:finetune:ShenaoZ/0.0001_withdpo_3iters_bs256_551lr_misit_iter_2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T12:04:12Z
--- license: apache-2.0 base_model: ShenaoZ/0.0001_withdpo_3iters_bs256_551lr_misit_iter_2 tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.0001_withdpo_3iters_bs256_551lr_misit_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_withdpo_3iters_bs256_551lr_misit_iter_3 This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_3iters_bs256_551lr_misit_iter_2](https://huggingface.co/ShenaoZ/0.0001_withdpo_3iters_bs256_551lr_misit_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
mmnga/Ninja-v1-gguf
mmnga
2024-05-04T13:26:22Z
631
0
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:03:22Z
---
license: apache-2.0
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---

# Ninja-v1-gguf

This is a GGUF-format conversion of [Ninja-v1, published by Local-Novel-LLM-project](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1).

The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).

# Other models

[mmnga/Ninja-v1-gguf](https://huggingface.co/mmnga/Ninja-v1-gguf)
[mmnga/Ninja-v1-NSFW-gguf](https://huggingface.co/mmnga/Ninja-v1-NSFW-gguf)
[mmnga/Ninja-v1-128k-gguf](https://huggingface.co/mmnga/Ninja-v1-128k-gguf)
[mmnga/Ninja-v1-NSFW-128k-gguf](https://huggingface.co/mmnga/Ninja-v1-NSFW-128k-gguf)
[mmnga/Vecteus-v1-gguf](https://huggingface.co/mmnga/Vecteus-v1-gguf)

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'Ninja-v1-Q4_0.gguf' -n 128 -p 'あなたはプロの小説家です。\n小説を書いてください\n-------- \nくじらの物語\n-------- \n'
```
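For reference, an imatrix-based quantization with llama.cpp typically looks like the sketch below; the file names are placeholders, and the exact commands used for this repository were not published:

```
# Compute an importance matrix from a calibration text file
./imatrix -m Ninja-v1-f16.gguf -f imatrix-dataset.txt -o imatrix.dat

# Quantize the f16 GGUF using that importance matrix
./quantize --imatrix imatrix.dat Ninja-v1-f16.gguf Ninja-v1-Q4_K_M.gguf Q4_K_M
```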
manoj-dhakal/mistral_philosloppy-7B
manoj-dhakal
2024-05-04T13:24:32Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T13:19:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DUAL-GPO/zephyr-7b-gpo-log-i1
DUAL-GPO
2024-05-04T13:23:51Z
8
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-03T15:26:06Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: zephyr-7b-gpo-log-i1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gpo-log-i1 This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-log-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-log-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.7084 - Rewards/chosen: -0.3387 - Rewards/rejected: -0.3762 - Rewards/accuracies: 0.4641 - Rewards/margins: 0.0375 - Logps/rejected: -284.1953 - Logps/chosen: -296.7821 - Logits/rejected: -1.6524 - Logits/chosen: -1.8037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - gradient_accumulation_steps: 2 - total_train_batch_size: 12 - total_eval_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6748 | 0.04 | 200 | 0.7007 | -0.3675 | -0.3814 | 0.4446 | 0.0139 | -284.7155 | -299.6654 | -1.8001 | -1.9625 | | 0.6724 | 0.08 | 400 | 0.7027 | -0.3184 | -0.3527 | 0.4940 | 0.0344 | -281.8482 | -294.7475 | -1.7890 | -1.9524 | | 0.6749 | 0.12 | 600 | 0.7100 | -0.3255 | -0.3594 | 0.4760 | 0.0339 | -282.5139 | -295.4615 | -1.6820 | -1.8358 | | 0.6719 | 0.16 | 800 | 0.7050 | -0.3022 | -0.3372 | 0.4775 | 0.0350 | -280.2988 | -293.1357 | -1.7259 | -1.8834 | | 0.6777 | 0.2 | 1000 | 0.7025 | -0.2948 | -0.3142 | 0.4461 | 0.0194 | -277.9926 | -292.3886 | -1.7123 | -1.8681 | | 0.6724 | 0.24 | 1200 | 0.7089 | -0.4249 | -0.4720 | 0.4865 | 0.0471 | -293.7763 | -305.4027 | -1.7346 | -1.8939 | | 0.6763 | 0.28 | 1400 | 0.7065 | -0.3751 | -0.4179 | 0.4746 | 0.0428 | -288.3666 | -300.4254 | -1.6995 | -1.8560 | | 0.6729 | 0.32 | 1600 | 0.7084 | -0.3379 | -0.3600 | 0.4641 | 0.0221 | -282.5755 | -296.7008 | -1.7340 | -1.8920 | | 0.6734 | 0.36 | 1800 | 0.7037 | -0.3077 | -0.3258 | 0.4521 | 0.0182 | -279.1587 | -293.6775 | -1.7089 | -1.8649 | | 0.6754 | 0.4 | 2000 | 0.7073 | -0.4076 | -0.4418 | 0.4671 | 0.0342 | -290.7584 | -303.6719 | -1.7361 | -1.8949 | | 0.679 | 0.44 | 2200 | 0.7075 | -0.4434 | -0.4787 | 0.4611 | 0.0353 | -294.4463 | -307.2497 | -1.6814 | -1.8362 | | 0.6692 | 0.48 | 2400 | 0.7067 | -0.3067 | -0.3478 | 0.4716 | 0.0411 | -281.3559 | -293.5765 | -1.6761 | -1.8305 | | 0.6778 | 0.52 | 2600 | 0.7036 | -0.2610 | -0.2905 | 0.4626 | 0.0294 | -275.6222 | -289.0128 | -1.7120 | -1.8687 | | 0.6687 | 0.56 | 2800 | 0.7113 | -0.4071 | -0.4423 | 0.4626 | 0.0353 | -290.8080 
| -303.6171 | -1.6930 | -1.8484 | | 0.6741 | 0.6 | 3000 | 0.7067 | -0.3261 | -0.3614 | 0.4671 | 0.0354 | -282.7206 | -295.5167 | -1.6692 | -1.8222 | | 0.674 | 0.64 | 3200 | 0.7085 | -0.3171 | -0.3556 | 0.4716 | 0.0384 | -282.1313 | -294.6258 | -1.6840 | -1.8385 | | 0.6712 | 0.68 | 3400 | 0.7083 | -0.3545 | -0.3873 | 0.4626 | 0.0329 | -285.3080 | -298.3568 | -1.6600 | -1.8125 | | 0.6738 | 0.72 | 3600 | 0.7078 | -0.4016 | -0.4475 | 0.4805 | 0.0458 | -291.3219 | -303.0744 | -1.6368 | -1.7870 | | 0.6748 | 0.76 | 3800 | 0.7085 | -0.3558 | -0.4037 | 0.4746 | 0.0478 | -286.9418 | -298.4960 | -1.6370 | -1.7875 | | 0.6746 | 0.8 | 4000 | 0.7097 | -0.3549 | -0.3943 | 0.4641 | 0.0394 | -286.0046 | -298.4026 | -1.6465 | -1.7977 | | 0.6772 | 0.84 | 4200 | 0.7088 | -0.3280 | -0.3650 | 0.4611 | 0.0369 | -283.0742 | -295.7155 | -1.6640 | -1.8161 | | 0.6718 | 0.88 | 4400 | 0.7082 | -0.3267 | -0.3617 | 0.4566 | 0.0349 | -282.7410 | -295.5824 | -1.6550 | -1.8062 | | 0.6737 | 0.92 | 4600 | 0.7085 | -0.3416 | -0.3797 | 0.4656 | 0.0381 | -284.5475 | -297.0699 | -1.6499 | -1.8009 | | 0.6742 | 0.96 | 4800 | 0.7085 | -0.3387 | -0.3765 | 0.4716 | 0.0378 | -284.2217 | -296.7780 | -1.6508 | -1.8018 | | 0.6708 | 1.0 | 5000 | 0.7084 | -0.3387 | -0.3762 | 0.4641 | 0.0375 | -284.1953 | -296.7821 | -1.6524 | -1.8037 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
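Since this repository contains a PEFT adapter, a hedged loading sketch with `peft` and `transformers` is shown below, pairing the adapter with the base model declared in the metadata above; the prompt and generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "DUAL-GPO/zephyr-7b-gpo-log-i1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Explain direct preference optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```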
archbold/LunarLander-v2_unit1
archbold
2024-05-04T13:22:07Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-04T13:22:00Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 251.43 +/- 21.40 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
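A minimal sketch of loading the agent and rolling out one episode; the checkpoint filename inside the repo is an assumption (SB3 Hub uploads are usually stored as a single .zip).

```python
# Sketch only, not the author's code: load the PPO agent from the Hub and play one episode.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption about how the checkpoint was saved in this repo.
checkpoint = load_from_hub(repo_id="archbold/LunarLander-v2_unit1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)  # depending on versions, custom_objects may be needed here

env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(int(action))
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward:.1f}")
```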
OwOpeepeepoopoo/herewegoagain24
OwOpeepeepoopoo
2024-05-04T13:19:39Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T13:18:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AhmedTarek/Reinforce-CartPole-v1
AhmedTarek
2024-05-04T13:12:56Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-04T13:12:17Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Muhammad2003/OrpoLlama3-8B
Muhammad2003
2024-05-04T13:11:55Z
13
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "orpo", "conversational", "en", "dataset:mlabonne/orpo-dpo-mix-40k", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-27T08:35:57Z
--- language: - en license: apache-2.0 library_name: transformers tags: - orpo base_model: - meta-llama/Meta-Llama-3-8B datasets: - mlabonne/orpo-dpo-mix-40k --- # OrpoLlama3-8B ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/oa8hfBhbPfN6MPWVMJoLq.jpeg) This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on 15k steps of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Muhammad2003/OrpoLlama3-8B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` ## 📈 Training curves Wandb Report ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/eFL8QhHbSjY45Ai2JQFj9.png) ## 🏆 Evaluation ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/E5XZI4Hiaw3C3gThvoKrH.png)
Youdontknowme720/q-FrozenLake-v1-4x4-noSlipperythankGod
Youdontknowme720
2024-05-04T13:11:27Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-04T13:11:24Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlipperythankGod results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Youdontknowme720/q-FrozenLake-v1-4x4-noSlipperythankGod", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
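The `load_from_hub` helper used above is not defined in this card. A self-contained sketch, assuming the pickle follows the Deep RL Course convention (a dict with `qtable` and `env_id` keys), might look like this:

```python
# Sketch only (assumption: the pickle is a dict with "qtable" and "env_id" keys);
# not code from the original card.
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved Q-learning model from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(
    repo_id="Youdontknowme720/q-FrozenLake-v1-4x4-noSlipperythankGod",
    filename="q-learning.pkl",
)
env = gym.make(model["env_id"], is_slippery=False)  # match the non-slippery training setup

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```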
casque/Two-Handed_Handjob
casque
2024-05-04T13:11:07Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-04T13:10:43Z
--- license: creativeml-openrail-m ---
ThuyNT/CS505_COQE_viT5_train_InstructionN4_PASOL_v1
ThuyNT
2024-05-04T13:09:03Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T12:20:20Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_PASOL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_PASOL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
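No usage snippet is provided in this card; a minimal inference sketch with the `transformers` pipeline is shown below. The input string is purely illustrative, since the expected prompt format for this task is not documented here.

```python
# Sketch only (not from the card): basic inference with the fine-tuned viT5 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="ThuyNT/CS505_COQE_viT5_train_InstructionN4_PASOL_v1",
)

# The expected prompt format is task-specific and not documented; this input is illustrative only.
print(generator("Ví dụ câu đầu vào", max_new_tokens=64)[0]["generated_text"])
```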
JacobAndersson/slimed-qwen-2
JacobAndersson
2024-05-04T13:08:03Z
122
0
transformers
[ "transformers", "safetensors", "qwen2_moe", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T13:05:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MathGenie/MathGenie-Mixtral-8x7B
MathGenie
2024-05-04T13:07:53Z
21
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "code", "math", "en", "arxiv:2402.16352", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-18T06:57:17Z
--- license: apache-2.0 language: - en metrics: - accuracy pipeline_tag: text-generation tags: - code - math --- # MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs This is a model for the paper "[MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs](https://arxiv.org/pdf/2402.16352.pdf)". ## News - **[2024-02-26]** Our paper is now accessible at [ArXiv Paper](https://arxiv.org/pdf/2402.16352.pdf). ## Introduction Large language models (LLMs) have exhibited great potential in mathematical reasoning. However, there remains a performance gap in this area between existing open-source models and closed-source models such as GPT-4. In this paper, we introduce **MathGenie**, a novel method for generating diverse and reliable math problems from a small-scale problem-solution dataset (denoted as *seed data*). We augment the ground-truth solutions of our seed data and train a back-translation model to translate the augmented solutions back into new questions. Subsequently, we generate code-integrated solutions for the new questions. To ensure the correctness of the code-integrated solutions, we employ a rationale-based strategy for solution verification. Various pretrained models, ranging from 7B to 70B, are trained on the newly curated data to test the effectiveness of the proposed augmentation technique, resulting in a family of models known as *MathGenieLM*. These models consistently outperform previous open-source models across five representative mathematical reasoning datasets, achieving state-of-the-art performance. In particular, MathGenieLM-InternLM2 achieves an accuracy of 87.7% on GSM8K and 55.7% on MATH, securing the best overall score among open-source language models. You can refer to the [project homepage](https://mathgenie.github.io/) and [the paper](https://arxiv.org/pdf/2402.16352.pdf) for more details. ## Usage ### Models Our [MathGenie-InterLM-20B](https://huggingface.co/MathGenie/MathGenie-InterLM-20B) model is available on Hugging Face now. Our [MathGenie-Mixtral-8x7B](https://huggingface.co/MathGenie/MathGenie-Mixtral-8x7B) model is available on Hugging Face now. | Base Model | Model | | ------------ | ------------------------------------------------------------ | | InternLM-20B | [MathGenie-InterLM-20B](https://huggingface.co/MathGenie/MathGenie-InterLM-20B) | | Mixtral-8x7B | [MathGenie-Mixtral-8x7B](https://huggingface.co/MathGenie/MathGenie-Mixtral-8x7B) | ### Inference & Evaluation **template** ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ '<|user|>' }}{% elif message['role'] == 'system' %} {{ '<|system|>' }}{% elif message['role'] == 'assistant' %} {{ '<|assistant|>' }}{% endif %} {% for block in message['content'] %} {% if block['type'] == 'text' %} {{ '<|text|>' }}{% elif block['type'] == 'code' %} {{ '<|code|>' }}{% elif block['type'] == 'execution' %} {{ '<|execution|>' }}{% endif %} {{ block['content'] + '<|endofblock|>' }}{% endfor %} {{ '<|endofmessage|>' }}{% endfor %} ``` Please refer to the [MathCoder repo](https://github.com/mathllm/MathCoder) for the detailed code for inference and evaluation of our MathGenieLM models. 
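As a rough, dependency-free illustration of how this template lays out a conversation (an approximation written for this card, not the MathCoder repo's code):

```python
# Sketch only: render block-structured messages with the special tokens shown in the template above.
ROLE_TOKENS = {"user": "<|user|>", "system": "<|system|>", "assistant": "<|assistant|>"}
BLOCK_TOKENS = {"text": "<|text|>", "code": "<|code|>", "execution": "<|execution|>"}

def render(messages):
    out = []
    for message in messages:
        out.append(ROLE_TOKENS[message["role"]])
        for block in message["content"]:
            out.append(BLOCK_TOKENS[block["type"]] + block["content"] + "<|endofblock|>")
        out.append("<|endofmessage|>")
    return "".join(out)

prompt = render([
    {"role": "user", "content": [{"type": "text", "content": "Compute 3 + 5 with Python."}]},
])
print(prompt)
```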
## Citation If you find this paper helpful to your research, please kindly cite this BibTex: ``` @misc{lu2024mathgenie, title={MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs}, author={Zimu Lu and Aojun Zhou and Houxing Ren and Ke Wang and Weikang Shi and Junting Pan and Mingjie Zhan and Hongsheng Li}, year={2024}, eprint={2402.16352}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @inproceedings{ wang2024mathcoder, title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning}, author={Ke Wang and Houxing Ren and Aojun Zhou and Zimu Lu and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=z8TW0ttBPp} } ```
ThuyNT/CS505_COQE_viT5_train_InstructionN4_OPSAL_v1
ThuyNT
2024-05-04T13:06:45Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T12:21:54Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_OPSAL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_OPSAL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
MathGenie/MathGenie-InterLM-20B
MathGenie
2024-05-04T13:06:28Z
124
8
transformers
[ "transformers", "safetensors", "internlm2", "feature-extraction", "code", "math", "text-generation", "custom_code", "en", "arxiv:2402.16352", "license:apache-2.0", "region:us" ]
text-generation
2024-02-27T08:36:23Z
--- license: apache-2.0 language: - en metrics: - accuracy pipeline_tag: text-generation tags: - code - math --- # MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs This is a model for the paper "[MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs](https://arxiv.org/pdf/2402.16352.pdf)". ## News - **[2024-02-26]** Our paper is now accessible at [ArXiv Paper](https://arxiv.org/pdf/2402.16352.pdf). ## Introduction Large language models (LLMs) have exhibited great potential in mathematical reasoning. However, there remains a performance gap in this area between existing open-source models and closed-source models such as GPT-4. In this paper, we introduce **MathGenie**, a novel method for generating diverse and reliable math problems from a small-scale problem-solution dataset (denoted as *seed data*). We augment the ground-truth solutions of our seed data and train a back-translation model to translate the augmented solutions back into new questions. Subsequently, we generate code-integrated solutions for the new questions. To ensure the correctness of the code-integrated solutions, we employ a rationale-based strategy for solution verification. Various pretrained models, ranging from 7B to 70B, are trained on the newly curated data to test the effectiveness of the proposed augmentation technique, resulting in a family of models known as *MathGenieLM*. These models consistently outperform previous open-source models across five representative mathematical reasoning datasets, achieving state-of-the-art performance. In particular, MathGenieLM-InternLM2 achieves an accuracy of 87.7% on GSM8K and 55.7% on MATH, securing the best overall score among open-source language models. You can refer to the [project homepage](https://mathgenie.github.io/) and [the paper](https://arxiv.org/pdf/2402.16352.pdf) for more details. ## Usage ### Models Our [MathGenie-InterLM-20B](https://huggingface.co/MathGenie/MathGenie-InterLM-20B) model is available on Hugging Face now. Our [MathGenie-Mixtral-8x7B](https://huggingface.co/MathGenie/MathGenie-Mixtral-8x7B) model is available on Hugging Face now. | Base Model | Model | | ------------ | ------------------------------------------------------------ | | InternLM-20B | [MathGenie-InterLM-20B](https://huggingface.co/MathGenie/MathGenie-InterLM-20B) | | Mixtral-8x7B | [MathGenie-Mixtral-8x7B](https://huggingface.co/MathGenie/MathGenie-Mixtral-8x7B) | ### Inference & Evaluation **template** ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ '<|user|>' }}{% elif message['role'] == 'system' %} {{ '<|system|>' }}{% elif message['role'] == 'assistant' %} {{ '<|assistant|>' }}{% endif %} {% for block in message['content'] %} {% if block['type'] == 'text' %} {{ '<|text|>' }}{% elif block['type'] == 'code' %} {{ '<|code|>' }}{% elif block['type'] == 'execution' %} {{ '<|execution|>' }}{% endif %} {{ block['content'] + '<|endofblock|>' }}{% endfor %} {{ '<|endofmessage|>' }}{% endfor %} ``` Please refer to the [MathCoder repo](https://github.com/mathllm/MathCoder) for the detailed code for inference and evaluation of our MathGenieLM models. 
## Citation If you find this paper helpful to your research, please kindly cite this BibTex: ``` @misc{lu2024mathgenie, title={MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs}, author={Zimu Lu and Aojun Zhou and Houxing Ren and Ke Wang and Weikang Shi and Junting Pan and Mingjie Zhan and Hongsheng Li}, year={2024}, eprint={2402.16352}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @inproceedings{ wang2024mathcoder, title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning}, author={Ke Wang and Houxing Ren and Aojun Zhou and Zimu Lu and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=z8TW0ttBPp} } ```
mertyildiz/results
mertyildiz
2024-05-04T13:03:42Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-04T13:03:04Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.36.2 - Pytorch 2.2.1+cu121 - Datasets 2.16.0 - Tokenizers 0.15.2
ar08/TinyLlama-1.1B-Chat-v1.0-GGUF-LAPTOP
ar08
2024-05-04T12:57:41Z
2
0
null
[ "gguf", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-04T07:22:35Z
--- license: mit language: - en --- ## Model Details - Fine-tuned and able to run on a laptop ### Model Description ------------ Capable of running on a low-end **laptop** - **Developed by:** [Tiny-llama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/tree/main) - **Finetuned from model [optional]:** [Tiny-llama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/tree/main) ## Uses ```python from llama_cpp import Llama llm = Llama( model_path="path/to/llama", # n_gpu_layers=-1, # Uncomment to use GPU acceleration # seed=1337, # Uncomment to set a specific seed # n_ctx=2048, # Uncomment to increase the context window ) output = llm( "Q: Name the planets in the solar system? A: ", # Prompt max_tokens=32, # Generate up to 32 tokens, set to None to generate up to the end of the context window stop=["Q:", "\n"], # Stop generating just before the model would generate a new question echo=True # Echo the prompt back in the output ) # Generate a completion, can also call create_completion print(output) ```
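Since this is a chat-tuned model, a follow-up sketch using llama-cpp-python's chat API may be more convenient, assuming the GGUF file ships with a usable chat template; the model path below is a placeholder.

```python
# Sketch only: chat-style usage with llama-cpp-python; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="path/to/tinyllama-1.1b-chat.gguf", n_ctx=2048)

# create_chat_completion applies the chat template bundled with the GGUF (if any).
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Name the planets in the solar system."},
    ],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```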
Litzy619/Phi0503HMA3
Litzy619
2024-05-04T12:49:59Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-04T02:56:26Z
--- license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - generated_from_trainer model-index: - name: Phi0503HMA3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi0503HMA3 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.2281 | 0.09 | 10 | 0.6893 | | 0.3554 | 0.18 | 20 | 0.2337 | | 0.2494 | 0.27 | 30 | 0.2261 | | 0.2206 | 0.36 | 40 | 0.1916 | | 0.213 | 0.45 | 50 | 0.1778 | | 0.1546 | 0.54 | 60 | 0.1014 | | 0.1079 | 0.63 | 70 | 0.0987 | | 0.0823 | 0.73 | 80 | 0.0974 | | 0.0902 | 0.82 | 90 | 0.0855 | | 0.0772 | 0.91 | 100 | 0.0706 | | 0.076 | 1.0 | 110 | 0.0844 | | 0.0666 | 1.09 | 120 | 0.0719 | | 0.0634 | 1.18 | 130 | 0.0803 | | 0.0711 | 1.27 | 140 | 0.0697 | | 0.0638 | 1.36 | 150 | 0.0679 | | 0.0665 | 1.45 | 160 | 0.0687 | | 0.0635 | 1.54 | 170 | 0.0664 | | 0.0605 | 1.63 | 180 | 0.0674 | | 0.0554 | 1.72 | 190 | 0.0641 | | 0.0604 | 1.81 | 200 | 0.0623 | | 0.0567 | 1.9 | 210 | 0.0664 | | 0.0528 | 1.99 | 220 | 0.0693 | | 0.0327 | 2.08 | 230 | 0.0751 | | 0.0273 | 2.18 | 240 | 0.0921 | | 0.0225 | 2.27 | 250 | 0.0998 | | 0.0254 | 2.36 | 260 | 0.0898 | | 0.0331 | 2.45 | 270 | 0.0737 | | 0.021 | 2.54 | 280 | 0.0749 | | 0.0256 | 2.63 | 290 | 0.0767 | | 0.0274 | 2.72 | 300 | 0.0765 | | 0.0299 | 2.81 | 310 | 0.0760 | | 0.0242 | 2.9 | 320 | 0.0754 | | 0.0273 | 2.99 | 330 | 0.0755 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
alijaanai/tokenizer_mistral-GB-Assistant
alijaanai
2024-05-04T12:49:34Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-04T12:49:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nermin-dubinovic/finetuned-mistral-7b-instruct-v0.2
nermin-dubinovic
2024-05-04T12:49:21Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-04T12:49:08Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** nermin-dubinovic - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
alijaanai/mistral-GB-Assistant
alijaanai
2024-05-04T12:44:55Z
0
1
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-04T12:44:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
theshoaib/lunarlander-v2
theshoaib
2024-05-04T12:42:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-04T12:42:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.68 +/- 29.20 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
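A minimal evaluation sketch, assuming the checkpoint was saved under the usual SB3 Hub filename:

```python
# Sketch only: load the agent and measure mean reward with SB3's built-in evaluator.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="theshoaib/lunarlander-v2", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```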
frayline/ef_gpt2_radio2bhc_5000best_model
frayline
2024-05-04T12:42:07Z
166
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T12:05:48Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: ef_gpt2_radio2bhc_5000best_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ef_gpt2_radio2bhc_5000best_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001277560901745735 - train_batch_size: 4 - eval_batch_size: 4 - seed: 123 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.15.0
Anoop03031988/Code-Llama-2-7B-instruct-text2sql.gguf
Anoop03031988
2024-05-04T12:41:51Z
2
0
null
[ "gguf", "llama", "GGUF", "text2sql", "4-bit", "quantization", "Q8_K", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:52:23Z
--- license: apache-2.0 language: - en tags: - llama - GGUF - text2sql - 4-bit - quantization - Q8_K ---
mmnga/codegemma-1.1-7b-it-gguf
mmnga
2024-05-04T12:34:14Z
586
0
null
[ "gguf", "en", "ja", "dataset:kunishou/amenokaku-code-instruct", "dataset:TokenBender/code_instructions_122k_alpaca_style", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-05-04T11:37:14Z
--- license: gemma language: - en - ja datasets: - kunishou/amenokaku-code-instruct - TokenBender/code_instructions_122k_alpaca_style --- # codegemma-1.1-7b-it-gguf This is a gguf-format conversion of [codegemma-1.1-7b-it released by Google](https://huggingface.co/google/codegemma-1.1-7b-it). The imatrix data was created from 1000 samples each drawn from [kunishou/amenokaku-code-instruct](https://huggingface.co/datasets/kunishou/amenokaku-code-instruct) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style). ## Other models [mmnga/codegemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/codegemma-1.1-7b-it-gguf) [mmnga/codegemma-1.1-2b-gguf](https://huggingface.co/mmnga/codegemma-1.1-2b-gguf) [mmnga/gemma-2b-it-gguf](https://huggingface.co/mmnga/gemma-2b-it-gguf) [mmnga/gemma-7b-it-gguf](https://huggingface.co/mmnga/gemma-7b-it-gguf) [mmnga/gemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/gemma-1.1-7b-it-gguf) [mmnga/codegemma-7b-it-gguf](https://huggingface.co/mmnga/codegemma-7b-it-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'codegemma-1.1-7b-it-Q4_0.gguf' -n 128 -p 'Write Hello World.' ```
BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF
BlueNipples
2024-05-04T12:31:29Z
583
12
null
[ "gguf", "Solar", "Mistral", "Roleplay", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-17T04:36:02Z
--- license: apache-2.0 tags: - Solar - Mistral - Roleplay --- ![SnowLotus Logo](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/gTQtPK46laLIFg0RTAv73.png) ## Important Note The most recent version of llama.cpp has broken historical GGUFs, so I am uploading a few requants to preserve these two models' compatibility. These will be called v3 in the file naming even though they are the same model. ## Summary 3-4x Importance Matrix GGUFs and 3-4x regular GGUFs for https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B and https://huggingface.co/BlueNipples/DaringLotus-v2-10.7b. I added a few more quants. I'm super happy with these merges; they turned out great. Basically Daring is the slightly more creative/prose oriented one, but also slightly less coherent. Daring basically necessitates regens/swipes. They both have excellent prose for their size that is largely not very gpt-ish and are able to often take story context, lore entries and character card info into account. You can probably use these as your mainstay - which is especially helpful if your GPU struggles with 13b, and honestly I think these models are _probably_ equal to or better than any 13b anyway. I might be wrong, but I do think they are very good compared to anything I've personally run. See the individual model cards for merge recipe details. Thanks to lucyknada for helping me get the imatrix quants done quicker! ## Importance Matrix Note Imatrix currently does not run with Koboldcpp, although it is bound to be supported in the future as it is already supported by Llamacpp (and I'm guessing therefore ooba). Those quants should provide a perplexity boost especially to the smaller quants. The dat files are also there so if you make a fp16 gguf from the main model cards you might be able to save yourself some time producing your own imatrix quants. ### Format Notes Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this has a slerp gradient back into that, I'm not sure which applies here. Alpaca instruct formatting. ### Ayumi Index http://ayumi.m8geil.de/erp4_chatlogs/?S=rma_0#!/index In the Ayumi ERPv4 Chat Log Index, SnowLotus scores a 94.10 in Flesch which means it produces more complex sentences than Daring (quite complex), while DaringLotus scores higher in Var and Ad[jv], which means it makes heavier use of adjectives and adverbs (is more descriptive). Notably, Daring is in the top 8 for adjectives in a sentence, highest in its weight class if you discount the Chinese model, and in general both models did very well on this metric (SnowLotus ranks higher here than anything above it in IQ4), showcasing their descriptive ability. SnowLotus beats DaringLotus on IQ4 with a score of 70.94, only beaten by SOLAR Instruct and Fimbulvetr in its weight class (although also notably Kunoichi 7b by a slim margin), while DaringLotus is a bit lower at 65.37 - not as smart. Interestingly the benchmarking here showed repetition for both models (which I haven't seen), but more with SnowLotus - so it's possible Daring repeats less than SnowLotus? These roughly confirm my impressions of the differences, although they potentially reveal some new details too. I've had a great experience RPing with these models, and seen no repetition myself, but be sure to use MinP or DynaTemp rather than the older samplers and be prepared to regen anything they get stuck on!
Andrei481/llama3-8b-corpus-ro-8k-lora
Andrei481
2024-05-04T12:27:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-04T12:27:05Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Andrei481 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
OwOpeepeepoopoo/herewegoagain23
OwOpeepeepoopoo
2024-05-04T12:24:00Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T09:25:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf
RichardErkhov
2024-05-04T12:23:22Z
8
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-04T02:21:29Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) DistilHermes-2.5-Mistral-7B - GGUF - Model creator: https://huggingface.co/eren23/ - Original model: https://huggingface.co/eren23/DistilHermes-2.5-Mistral-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [DistilHermes-2.5-Mistral-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [DistilHermes-2.5-Mistral-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [DistilHermes-2.5-Mistral-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [DistilHermes-2.5-Mistral-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [DistilHermes-2.5-Mistral-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [DistilHermes-2.5-Mistral-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [DistilHermes-2.5-Mistral-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [DistilHermes-2.5-Mistral-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [DistilHermes-2.5-Mistral-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [DistilHermes-2.5-Mistral-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [DistilHermes-2.5-Mistral-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [DistilHermes-2.5-Mistral-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [DistilHermes-2.5-Mistral-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [DistilHermes-2.5-Mistral-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [DistilHermes-2.5-Mistral-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [DistilHermes-2.5-Mistral-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | 
[DistilHermes-2.5-Mistral-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [DistilHermes-2.5-Mistral-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [DistilHermes-2.5-Mistral-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [DistilHermes-2.5-Mistral-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [DistilHermes-2.5-Mistral-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/eren23_-_DistilHermes-2.5-Mistral-7B-gguf/blob/main/DistilHermes-2.5-Mistral-7B.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- language: - en license: apache-2.0 library_name: transformers datasets: - argilla/distilabel-intel-orca-dpo-pairs pipeline_tag: question-answering model-index: - name: DistilHermes-2.5-Mistral-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.65 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.24 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 59.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard --- DPO Finetuned teknium/OpenHermes-2.5-Mistral-7B using argilla/distilabel-intel-orca-dpo-pairs. 
The DPO dataset, https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs, is a distilled version of https://huggingface.co/datasets/Intel/orca_dpo_pairs. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__DistilHermes-2.5-Mistral-7B) | Metric |Value| |---------------------------------|----:| |Avg. |67.76| |AI2 Reasoning Challenge (25-Shot)|65.87| |HellaSwag (10-Shot) |84.78| |MMLU (5-Shot) |63.65| |TruthfulQA (0-shot) |54.24| |Winogrande (5-shot) |78.22| |GSM8k (5-shot) |59.82|
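A minimal usage sketch for the quants above, assuming llama-cpp-python is installed, one of the listed GGUF files has been downloaded locally, and the ChatML prompt format of the OpenHermes-2.5 base model applies:

```python
from llama_cpp import Llama

# Load one of the quantized files from the table above (local placeholder path).
llm = Llama(model_path="DistilHermes-2.5-Mistral-7B.Q4_K_M.gguf", n_ctx=4096)

# ChatML prompt format, assumed from the OpenHermes-2.5 lineage of this model.
prompt = "<|im_start|>user\nExplain DPO fine-tuning in one sentence.<|im_end|>\n<|im_start|>assistant\n"
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```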
NiharGupte/swinv2-tiny-patch4-window8-256-finetuned-student_kaggle
NiharGupte
2024-05-04T12:21:52Z
148
0
transformers
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window8-256", "base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-04T12:16:57Z
--- license: apache-2.0 base_model: microsoft/swinv2-tiny-patch4-window8-256 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swinv2-tiny-patch4-window8-256-finetuned-student_kaggle results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9575471698113207 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-finetuned-student_kaggle This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0986 - Accuracy: 0.9575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6285 | 1.0 | 47 | 0.3820 | 0.7940 | | 0.3821 | 2.0 | 94 | 0.1866 | 0.9214 | | 0.2436 | 3.0 | 141 | 0.1285 | 0.9465 | | 0.2089 | 4.0 | 188 | 0.1065 | 0.9623 | | 0.1852 | 5.0 | 235 | 0.0986 | 0.9575 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
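A minimal inference sketch, assuming the checkpoint above and a local image file; the class labels come from the undocumented imagefolder dataset, so treat them as placeholders:

```python
from transformers import pipeline

# Image classification with the fine-tuned SwinV2 checkpoint.
classifier = pipeline(
    "image-classification",
    model="NiharGupte/swinv2-tiny-patch4-window8-256-finetuned-student_kaggle",
)

# "example.jpg" is a placeholder path; labels depend on the training imagefolder dataset.
print(classifier("example.jpg"))
```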
ThuyNT/CS505_COQE_viT5_train_InstructionN4_AOPSL_v1
ThuyNT
2024-05-04T12:21:19Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T03:52:00Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_AOPSL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_AOPSL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
ThuyNT/CS505_COQE_viT5_train_InstructionN4_POASL_v1
ThuyNT
2024-05-04T12:20:15Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T03:53:16Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_POASL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_POASL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
sachin/debug-clip-model
sachin
2024-05-04T12:18:19Z
33
0
transformers
[ "transformers", "safetensors", "text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-28T07:33:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Wouter01/really_good_diffusion_model_out
Wouter01
2024-05-04T12:12:56Z
32
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-04-29T11:45:54Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet - diffusers-training base_model: stabilityai/stable-diffusion-2-1-base inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-Wouter01/really_good_diffusion_model_out These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning. You can find some example images below. prompt: ![images_0](./images_0.png) prompt: ![images_1](./images_1.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch follows after this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
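For the "How to use" TODO above, a minimal sketch with diffusers, assuming the ControlNet weights load directly from this repository and that the conditioning image matches the (unspecified) new conditioning type used during training:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights and attach them to the SD 2.1 base model.
controlnet = ControlNetModel.from_pretrained(
    "Wouter01/really_good_diffusion_model_out", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder conditioning image; it must match the conditioning type used in training.
conditioning = load_image("conditioning.png")
image = pipe(
    prompt="a high-quality photo",  # placeholder prompt
    image=conditioning,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```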
imagepipeline/reverse
imagepipeline
2024-05-04T12:08:47Z
0
0
null
[ "imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-04-30T15:06:45Z
--- license: creativeml-openrail-m tags: - imagepipeline - imagepipeline.io - text-to-image - ultra-realistic pinned: false pipeline_tag: text-to-image --- ## reverse <img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;"> **This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)** Model details - version 1 [![Try this model](https://img.shields.io/badge/try_this_model-image_pipeline-BD9319)](https://imagepipeline.io/models/reverse?id=c50ec5bb-5215-4d84-ac7e-2f9caa563ff5/) ## How to try this model ? You can try using it locally or send an API call to test the output quality. Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required. Coding in `php` `javascript` `node` etc ? Checkout our documentation [![documentation](https://img.shields.io/badge/documentation-image_pipeline-blue)](https://docs.imagepipeline.io/docs/introduction) ```python import requests import json url = "https://imagepipeline.io/sd/text2image/v1/run" payload = json.dumps({ "model_id": "sd1.5", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": false, "guidance_scale": 7.5, "multi_lingual": "no", "embeddings": "", "lora_models": "c50ec5bb-5215-4d84-ac7e-2f9caa563ff5", "lora_weights": "0.5" }) headers = { 'Content-Type': 'application/json', 'API-Key': 'your_api_key' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) } ``` Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` : [![All models](https://img.shields.io/badge/Get%20All%20Models-image_pipeline-BD9319)](https://imagepipeline.io/models) ### API Reference #### Generate Image ```http https://api.imagepipeline.io/sd/text2image/v1 ``` | Headers | Type | Description | |:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------| | `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) | | `Content-Type` | `str` | application/json - content type of the request body | | Parameter | Type | Description | | :-------- | :------- | :------------------------- | | `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own| | `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips | | `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) | | `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. 
Ideal value 7.5-12.5 | | `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page | | `lora_weights` | `str, array` | Strength of the LoRA effect | ### Feedback If you have any feedback, please reach out to us at hello@imagepipeline.io #### 🔗 Visit Website [![portfolio](https://img.shields.io/badge/image_pipeline-BD9319?style=for-the-badge&logo=gocd&logoColor=white)](https://imagepipeline.io/) If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
mikhail-panzo/fil_b128_le5_s8000
mikhail-panzo
2024-05-04T12:05:15Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-05-01T05:22:20Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: fil_b128_le5_s8000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fil_b128_le5_s8000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.5575 | 43.4783 | 500 | 0.4795 | | 0.4877 | 86.9565 | 1000 | 0.4371 | | 0.4582 | 130.4348 | 1500 | 0.4196 | | 0.4486 | 173.9130 | 2000 | 0.4145 | | 0.4353 | 217.3913 | 2500 | 0.4096 | | 0.4261 | 260.8696 | 3000 | 0.4057 | | 0.4206 | 304.3478 | 3500 | 0.4049 | | 0.4117 | 347.8261 | 4000 | 0.4065 | | 0.4123 | 391.3043 | 4500 | 0.4029 | | 0.4117 | 434.7826 | 5000 | 0.4046 | | 0.4053 | 478.2609 | 5500 | 0.4045 | | 0.4064 | 521.7391 | 6000 | 0.4047 | | 0.4058 | 565.2174 | 6500 | 0.4042 | | 0.3995 | 608.6957 | 7000 | 0.4058 | | 0.4076 | 652.1739 | 7500 | 0.4054 | | 0.4042 | 695.6522 | 8000 | 0.4053 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
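A minimal text-to-speech sketch, assuming this repository includes the SpeechT5 processor files, the stock microsoft/speecht5_hifigan vocoder, and a borrowed CMU ARCTIC x-vector as the speaker embedding; the Filipino sentence is only a placeholder:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("mikhail-panzo/fil_b128_le5_s8000")
model = SpeechT5ForTextToSpeech.from_pretrained("mikhail-panzo/fil_b128_le5_s8000")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Magandang umaga sa inyong lahat.", return_tensors="pt")

# Any 512-dim x-vector works as the speaker embedding; this one is a common placeholder.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```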
karanpreetlm10/falcon-7b-sharded-bf16-finetuned-mental-health-conversational
karanpreetlm10
2024-05-04T12:03:31Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:adapter:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-05-04T12:03:28Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: ybelkada/falcon-7b-sharded-bf16 model-index: - name: falcon-7b-sharded-bf16-finetuned-mental-health-conversational results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-sharded-bf16-finetuned-mental-health-conversational This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
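A minimal sketch for loading the LoRA adapter on top of its base model with peft; the instruction-style prompt is an assumption, since the fine-tuning data format is not documented:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the sharded Falcon base, then attach the fine-tuned adapter.
base = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # older Falcon repos ship custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained("ybelkada/falcon-7b-sharded-bf16")
model = PeftModel.from_pretrained(
    base, "karanpreetlm10/falcon-7b-sharded-bf16-finetuned-mental-health-conversational"
)

# Assumed instruction-style prompt; adjust to whatever format was used in fine-tuning.
prompt = "<HUMAN>: I have been feeling anxious lately. What can I do?\n<ASSISTANT>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```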
AlignmentResearch/robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
AlignmentResearch
2024-05-04T12:03:21Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "base_model:finetune:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-04T12:02:48Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m model-index: - name: robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4 This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
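A minimal usage sketch, assuming the checkpoint exposes a working sequence-classification head; the label mapping is not documented, so treat the returned labels as placeholders:

```python
from transformers import pipeline

# Run the fine-tuned classifier on a movie-review-style sentence (IMDB-like task per the model name).
classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4",
)
print(classifier("This movie was an absolute delight from start to finish."))
```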
paul-stansifer/gemma-qwantz-coherent
paul-stansifer
2024-05-04T11:59:04Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:unsloth/gemma-1.1-2b-it-bnb-4bit", "base_model:adapter:unsloth/gemma-1.1-2b-it-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2024-05-04T02:49:10Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: unsloth/gemma-1.1-2b-it-bnb-4bit metrics: - accuracy model-index: - name: gemma-qwantz-coherent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-qwantz-coherent This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it-bnb-4bit](https://huggingface.co/unsloth/gemma-1.1-2b-it-bnb-4bit) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3727 - Accuracy: 0.8347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3477 | 1.0 | 1428 | 0.3727 | 0.8347 | ``` Can save 90% of coherent strings by discarding 73% of dp strings (cutoff is -73.30179810523987) Can save 95% of coherent strings by discarding 62% of dp strings (cutoff is -88.42642307281494) Can save 98% of coherent strings by discarding 47% of dp strings (cutoff is -95.46071887016296) Can save 99% of coherent strings by discarding 43% of dp strings (cutoff is -96.82570099830627) ---------------------------- My only problem (s) have to do with ==> coherent: 95.82% My only problem (s) to cheer them personally ==> dp: 99.89% (in small text) crazy utahraptor ==> coherent: 60.62% (in small text) ". ==> dp: 58.77% Well, I've made up my own joke to get him today. All I need to do is " ==> coherent: 52.12% Well, I've made up my own joke to get him today. All I need a father and gentlemen ==> dp: 99.77% I will be immortalized by kicking an evil ==> coherent: 50.01% I will be immortalized by kicking other punches ==> dp: 58.54% Aw shoot, I was supposed to ==> coherent: 92.77% Aw shoot, I was APOCALYPSE PORN ==> dp: 79.21% Get it? Because CRIME DOESN'T PAY!! Listen, my story has ==> dp: 59.16% Get it? Because CRIME DOESN'T PAY!! Listen, transcriptions of it ==> dp: 80.30% Utahraptor!! DON'T LISTEN TO ==> coherent: 90.59% Utahraptor! This is sort of ==> dp: 88.15% Doesn't exist in my mouth, that is!! Because it's too big ==> coherent: 95.82% Doesn't exist in my mouth, that is!! Because if Superman. ==> dp: 99.44% Now, HERE'S how ==> coherent: 94.16% Now, guys would ==> dp: 73.45% But I am a rock star ==> coherent: 98.57% But I am a guy come ==> dp: 96.79% But I have a solution to make them interesting again: all you need is stories where not ==> dp: 64.66% But I have a solution to make them interesting again: all you need is gonna! Diseases ==> dp: 99.92% At that point, there's a sequence of six nines in a row, and his joke was that he'd like to memorize pi up to that point, so that when reciting he could end with "9,9,9,9,9,9... and so on. " Others ==> dp: 77.18% At that point, there's a sequence of six nines in a row, and his joke was that he'd like to memorize pi up to that point, so that when reciting he could end with "9,9,9,9,9,9... 
and so it's great he looks ==> dp: 99.98% This is definitely called " T -Rex's Hilarious e joke ", okay ==> coherent: 77.15% This is definitely called " T -Rex's Hilarious e joke AND IN THE ==> dp: 99.15% " Your mouth is full of cockroaches: ==> coherent: 72.25% " Your mouth is full of smooches. ==> coherent: 80.86% Excuse me, sexual congress? Everyone else on the planet is dead, and ==> coherent: 93.15% Excuse me, sexual congress? Everyone else on the planet without syntactic ambiguity! ==> dp: 96.98% Sony is going to write swears on my bathroom ==> dp: 50.96% Sony is going to write their babies need to ==> dp: 99.89% Beginning with the most modest: why am I ==> coherent: 90.87% Beginning with the most modest: why T - ==> dp: 84.62% Is there any greater meaning -to anything ==> dp: 93.76% Is there any greater meaning? When you ==> coherent: 57.11% I've also got steaks AND ==> coherent: 83.50% I've also cold -deterministic ==> dp: 99.30% I had a friend (female) who dated her roommate (also female) ==> coherent: 99.24% I had a friend (female) who dated her roommate, je suis grand ==> dp: 99.30% Yes... TOO BAD INDEED ==> coherent: 70.86% Yes... TOO MANY YEARS ==> coherent: 64.13% ``` ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
HeydarS/flant5_sm_popQA_peft_v61
HeydarS
2024-05-04T11:56:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-04T11:56:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
automerger/LlamaCosmicbun-7B
automerger
2024-05-04T11:54:21Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "automerger", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:mlabonne/OrpoLlama-3-8B", "base_model:merge:mlabonne/OrpoLlama-3-8B", "license:cc-by-nc-4.0", "region:us" ]
null
2024-05-04T11:54:20Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - NousResearch/Meta-Llama-3-8B-Instruct - mlabonne/OrpoLlama-3-8B --- # LlamaCosmicbun-7B LlamaCosmicbun-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) * [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) ## 🧩 Configuration ```yaml models: - model: NousResearch/Meta-Llama-3-8B # No parameters necessary for base model - model: NousResearch/Meta-Llama-3-8B-Instruct parameters: density: 0.6 weight: 0.5 - model: mlabonne/OrpoLlama-3-8B parameters: density: 0.55 weight: 0.05 merge_method: dare_ties base_model: NousResearch/Meta-Llama-3-8B parameters: int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/LlamaCosmicbun-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
RichardErkhov/wenbopan_-_Faro-Yi-9B-4bits
RichardErkhov
2024-05-04T11:50:52Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2303.08774", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-04T11:44:33Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Faro-Yi-9B - bnb 4bits - Model creator: https://huggingface.co/wenbopan/ - Original model: https://huggingface.co/wenbopan/Faro-Yi-9B/ Original model description: --- license: mit datasets: - wenbopan/Fusang-v1 - wenbopan/OpenOrca-zh-20k language: - zh - en --- ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp) **The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.** # Faro-Yi-9B Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1. Just like Yi-9B-200K, Faro-Yi-9B supports up to 200K context length. ## How to Use Faro-Yi-9B uses the chatml template and performs well in both short and long contexts. For longer inputs under **24GB of VRAM**, I recommend to use vLLM to have a max prompt of 32K. Setting `kv_cache_dtype="fp8_e5m2"` allows for 48K input length. 4bit-AWQ quantization on top of that can boost input length to 160K, albeit with some performance impact. Adjust `max_model_len` arg in vLLM or `config.json` to avoid OOM. ```python import io import requests from PyPDF2 import PdfReader from vllm import LLM, SamplingParams llm = LLM(model="wenbopan/Faro-Yi-9B", kv_cache_dtype="fp8_e5m2", max_model_len=100000) pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content) document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages) # 100 pages question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?" messages = [ {"role": "user", "content": question} ] # 83K tokens prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False) output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500)) print(output[0].outputs[0].text) # Yi-9B-200K: 175B. GPT-4 has 175B \nparameters. How many models were combined to create GPT-4? Answer: 6. ... # Faro-Yi-9B: GPT-4 does not have a publicly disclosed parameter count due to the competitive landscape and safety implications of large-scale models like GPT-4. ... ``` <details> <summary>Or With Transformers</summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Yi-9B', device_map="cuda") tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Yi-9B') messages = [ {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."}, {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."} ] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device) generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5) response = tokenizer.decode(generated_ids[0], skip_special_tokens=True) # Aye, matey! 
The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. ... ``` </details> ## Performance Faro-Yi-9B enhances its ability compared to Yi-9B-200K in most dimensions, especially in long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among all open-sourced models at around 9B parameters. <details> <summary>Benchmark Results</summary> ### Fact-based Evaluation (Open LLM Leaderboard) | **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** | | -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- | | **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 | | **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 | ### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench)) | **Name** | **Average_zh** | **Average_en** | **Code Completion** | |----------------|----------------|----------------|---------------------| | **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 | | **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 | <details> <summary>Score breakdown</summary> | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** | |----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------| | **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 | | **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** | </details> ### Performance on Preference (MT-Bench) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/M0Kc64sIsbNyCCvrRk1Lv.png) ### Bilingual Ability (CMMLU & MMLU) | **Name** | MMLU | **CMMLU** | | -------------- | --------- | --------- | | **Yi-9B-200K** | 65.73 | 71.97 | | **Faro-Yi-9B** | **68.80** | **73.28** | </details>
mmnga/gemma-7b-it-gguf
mmnga
2024-05-04T11:50:28Z
276
8
null
[ "gguf", "en", "ja", "license:other", "endpoints_compatible", "region:us" ]
null
2024-02-21T18:04:35Z
--- license: other license_name: gemma-terms-of-use license_link: https://www.kaggle.com/models/google/gemma/license/consent language: - en - ja --- # gemma-7b-it-gguf This is a gguf-format conversion of [gemma-7b-it published by Google](https://huggingface.co/google/gemma-7b-it). **There is currently a reported issue with unstable quantized output, so Q8_0 is recommended.** [gemma : use Q8_0 for the token_embd.weight tensor #5650](https://github.com/ggerganov/llama.cpp/pull/5650) # Licence Please be sure to review the [gemma-terms-of-use terms](https://ai.google.dev/gemma/terms) before use. ## Other models [mmnga/codegemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/codegemma-1.1-7b-it-gguf) [mmnga/codegemma-1.1-2b-gguf](https://huggingface.co/mmnga/codegemma-1.1-2b-gguf) [mmnga/gemma-2b-it-gguf](https://huggingface.co/mmnga/gemma-2b-it-gguf) [mmnga/gemma-7b-it-gguf](https://huggingface.co/mmnga/gemma-7b-it-gguf) [mmnga/gemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/gemma-1.1-7b-it-gguf) [mmnga/codegemma-7b-it-gguf](https://huggingface.co/mmnga/codegemma-7b-it-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'gemma-7b-it-q4_0.gguf' -p "<start_of_turn>user\n日本の文化を10個教えて。<end_of_turn>\n<start_of_turn>model\n" -n 128 ```
mmnga/gemma-2b-it-gguf
mmnga
2024-05-04T11:50:16Z
276
3
null
[ "gguf", "en", "ja", "license:other", "endpoints_compatible", "region:us" ]
null
2024-02-22T14:42:04Z
--- license: other license_name: gemma-terms-of-use license_link: https://www.kaggle.com/models/google/gemma/license/consent language: - en - ja --- # gemma-2b-it-gguf This is a gguf-format conversion of [gemma-2b-it published by Google](https://huggingface.co/google/gemma-2b-it). # Licence Please be sure to review the [gemma-terms-of-use terms](https://ai.google.dev/gemma/terms) before use. ## Other models [mmnga/codegemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/codegemma-1.1-7b-it-gguf) [mmnga/codegemma-1.1-2b-gguf](https://huggingface.co/mmnga/codegemma-1.1-2b-gguf) [mmnga/gemma-2b-it-gguf](https://huggingface.co/mmnga/gemma-2b-it-gguf) [mmnga/gemma-7b-it-gguf](https://huggingface.co/mmnga/gemma-7b-it-gguf) [mmnga/gemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/gemma-1.1-7b-it-gguf) [mmnga/codegemma-7b-it-gguf](https://huggingface.co/mmnga/codegemma-7b-it-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'gemma-2b-it-q4_0.gguf' -p "<start_of_turn>user\n日本の文化を10個教えて。<end_of_turn>\n<start_of_turn>model\n" -n 128 ```
mmnga/codegemma-1.1-2b-gguf
mmnga
2024-05-04T11:49:53Z
986
2
null
[ "gguf", "en", "ja", "dataset:kunishou/amenokaku-code-instruct", "dataset:TokenBender/code_instructions_122k_alpaca_style", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-05-04T09:56:24Z
--- license: gemma language: - en - ja datasets: - kunishou/amenokaku-code-instruct - TokenBender/code_instructions_122k_alpaca_style --- # codegemma-1.1-2b-gguf This is a gguf-format conversion of [codegemma-1.1-2b published by Google](https://huggingface.co/google/codegemma-1.1-2b). The imatrix data was created from 1,000 samples drawn from each of [kunishou/amenokaku-code-instruct](https://huggingface.co/datasets/kunishou/amenokaku-code-instruct) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style). ## Other models [mmnga/codegemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/codegemma-1.1-7b-it-gguf) [mmnga/codegemma-1.1-2b-gguf](https://huggingface.co/mmnga/codegemma-1.1-2b-gguf) [mmnga/gemma-2b-it-gguf](https://huggingface.co/mmnga/gemma-2b-it-gguf) [mmnga/gemma-7b-it-gguf](https://huggingface.co/mmnga/gemma-7b-it-gguf) [mmnga/gemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/gemma-1.1-7b-it-gguf) [mmnga/codegemma-7b-it-gguf](https://huggingface.co/mmnga/codegemma-7b-it-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'codegemma-1.1-2b-Q4_0.gguf' -n 128 -p 'Write Hello World.' ```
blockblockblock/Llama3-OpenBioLLM-70B-bpw4-exl2
blockblockblock
2024-05-04T11:49:39Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "conversational", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "base_model:quantized:meta-llama/Meta-Llama-3-70B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-05-04T11:45:59Z
--- base_model: meta-llama/Meta-Llama-3-70B-Instruct tags: - llama-3 - llama - Mixtral - instruct - finetune - chatml - DPO - RLHF - gpt4 - distillation model-index: - name: OpenBioLLM-70B results: [] license: llama3 language: - en widget: - example_title: OpenBioLLM-70B messages: - role: system content: >- You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. - role: user content: How long does it take for newborn jaundice to go away? output: text: >- Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment. The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines: 1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment. It's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance. 
--- <div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/fJIOPJnY6Ff6fUiSIuMEt.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg) Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks. 🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency. 🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks. 🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundations of the **Meta-Llama-3-70B-Instruct** and [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. 
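To make the DPO stage more concrete, the sketch below shows what a preference-optimization pass looks like with the TRL library. It is only an illustrative outline under stated assumptions: the dataset file, hyperparameters, and single-process setup are placeholders rather than the recipe actually used for OpenBioLLM-70B (which, per the training details below, relied on QLoRA adapters across 8 GPUs), and the exact `DPOTrainer` signature differs between TRL versions. The actual pipeline components are listed next.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "meta-llama/Meta-Llama-3-70B-Instruct"

# Hypothetical preference dataset with "prompt", "chosen", and "rejected" columns.
preferences = load_dataset("json", data_files="medical_preferences.jsonl", split="train")

model = AutoModelForCausalLM.from_pretrained(base_id)  # in practice: quantized + sharded
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama-3 tokenizer has no pad token by default

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # TRL builds a frozen reference copy when None
    beta=0.1,                # illustrative DPO temperature
    args=TrainingArguments(
        output_dir="openbiollm-dpo",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=preferences,
    tokenizer=tokenizer,
)
trainer.train()
```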
Key components of the training pipeline include: <div align="center"> <img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png"> </div> - **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) - **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated) This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications. ⚙️ **Release Details**: - **Model Size**: 70 billion parameters - **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF) - **Language(s) (NLP):** en - **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs - **License:** Meta-Llama License - **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) - **Resources for more information:** - Paper: Coming soon The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-70B with researchers and developers around the world. ### Use with transformers **Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "aaditya/OpenBioLLM-Llama3-70B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. 
Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."}, {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=False, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## **Training procedure** ### **Training hyperparameters** <details> <summary>Click to see details</summary> - learning_rate: 0.0002 - lr_scheduler: cosine - train_batch_size: 12 - eval_batch_size: 8 - GPU: H100 80GB SXM5 - num_devices: 8 - optimizer: adamw_bnb_8bit - lr_scheduler_warmup_steps: 100 - num_epochs: 4 </details> ### **Peft hyperparameters** <details> <summary>Click to see details</summary> - adapter: qlora - lora_r: 128 - lora_alpha: 256 - lora_dropout: 0.05 - lora_target_linear: true - lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj </details> ### **Training results** ### **Framework versions** - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 - Axolotl - Lm harness for evaluation # Benchmark Results 🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge. 🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy. 
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. 
By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!**  While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations. OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows: ``` @misc{OpenBioLLMs, author = {Ankit Pal, Malaikannan Sankarasubbu}, title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}} } ``` The accompanying paper is currently in progress and will be released soon. 
<div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
OwOpeepeepoopoo/herewegoagain22
OwOpeepeepoopoo
2024-05-04T11:33:32Z
92
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T09:25:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vicgalle/test-merge
vicgalle
2024-05-04T11:32:53Z
9
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "prometheus-eval/prometheus-7b-v2.0", "teknium/OpenHermes-2.5-Mistral-7B", "conversational", "base_model:prometheus-eval/prometheus-7b-v2.0", "base_model:merge:prometheus-eval/prometheus-7b-v2.0", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:merge:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T08:05:29Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - prometheus-eval/prometheus-7b-v2.0 - teknium/OpenHermes-2.5-Mistral-7B base_model: - prometheus-eval/prometheus-7b-v2.0 - teknium/OpenHermes-2.5-Mistral-7B model-index: - name: test-merge results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 59.38 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.25 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge name: Open LLM Leaderboard --- # test-merge test-merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0) * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml models: - model: prometheus-eval/prometheus-7b-v2.0 parameters: weight: 1.0 - model: teknium/OpenHermes-2.5-Mistral-7B parameters: weight: 1.0 merge_method: linear dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "vicgalle/test-merge" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation 
Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__test-merge) | Metric |Value| |---------------------------------|----:| |Avg. |63.99| |AI2 Reasoning Challenge (25-Shot)|60.58| |HellaSwag (10-Shot) |82.29| |MMLU (5-Shot) |59.38| |TruthfulQA (0-shot) |56.25| |Winogrande (5-shot) |76.40| |GSM8k (5-shot) |49.05|
Chung-Fan/longformer-elife
Chung-Fan
2024-05-04T11:31:06Z
91
0
transformers
[ "transformers", "safetensors", "led", "text2text-generation", "generated_from_trainer", "base_model:hyesunyun/update-summarization-bart-large-longformer", "base_model:finetune:hyesunyun/update-summarization-bart-large-longformer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T06:49:33Z
--- license: apache-2.0 base_model: hyesunyun/update-summarization-bart-large-longformer tags: - generated_from_trainer model-index: - name: longformer-elife results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longformer-elife This model is a fine-tuned version of [hyesunyun/update-summarization-bart-large-longformer](https://huggingface.co/hyesunyun/update-summarization-bart-large-longformer) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
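The card above stops at the framework versions without a usage example, so here is a minimal inference sketch. It assumes the checkpoint works with the standard `transformers` summarization pipeline; the input text and generation lengths are placeholders.

```python
from transformers import pipeline

# Load the fine-tuned LED/Longformer summarizer from the Hub.
summarizer = pipeline("summarization", model="Chung-Fan/longformer-elife")

article = "..."  # a long biomedical article (e.g. an eLife paper) to condense
result = summarizer(article, max_length=256, min_length=64, truncation=True)
print(result[0]["summary_text"])
```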
LoneStriker/Llama-3-70B-Instruct-Gradient-262k-5.0bpw-h6-exl2
LoneStriker
2024-05-04T11:26:45Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "arxiv:2402.08268", "arxiv:2310.05209", "arxiv:2305.14233", "arxiv:2309.00071", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-05-04T11:08:11Z
--- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 --- <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 70B Instruct Gradient 262K Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message contact@gradient.ai. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 70B's context length from 8k to > 262K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 105M tokens for this stage, and 188M tokens total for all stages, which is < 0.002% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/Ueev-bujAWFusU2uEcy_G.png) **Approach:** - [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) as the base - NTK-aware interpolation [4] following scaling laws [2] to set optimal schedule for RoPE theta - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [1] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [5] to scalably and efficiently train on very long contexts on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. We layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [3], following a similar recipe for data augmentation to [1]. **Progressive Training Details:** | | 65K | 262K | |--------------------------|-----------------|-----------------| | Initialize From | Llama-3-70B-Instruct | 65K | | Sequence Length 2^N | 16 | 18 | | RoPE theta | 15,296,098 | 207,112,184 | | Batch Size | 64 | 16 | | Gradient Accumulation Steps | 1 | 1 | | Steps | 20 | 25 | | Total Tokens | 83,886,080 | 104,857,600 | | Learning rate | 0.00002 | 0.00002 | | # GPUs | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall) | 100 | 170 | **Evaluation Details:** ``` EVAL_MAX_CONTEXT_LENGTH=320200 EVAL_MIN_CONTEXT_LENGTH=100 EVAL_CONTEXT_INTERVAL=16000 EVAL_DEPTH_INTERVAL=0.2 EVAL_NUM_SAMPLES=2 EVAL_RND_NUMBER_DIGITS=8 HAYSTACK: EVAL_GENERATOR_TOKENS=925000 ``` Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals). **Quants:** ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [contact@gradient.ai](mailto:contact@gradient.ai) ## References [1] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." 
arXiv preprint arXiv:2402.08268 (2024). [2] Liu, Xiaoran, et al. "Scaling laws of rope-based extrapolation." arXiv preprint arXiv:2310.05209 (2023). [3] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. [4] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [5] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. 
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-70B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. 
For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. 
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. 
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. 
Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate 
Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
rwr20/PyramidsTraining5
rwr20
2024-05-04T11:12:59Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-05-04T11:10:38Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: rwr20/PyramidsTraining5 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
SJChaudhuri/swin-tiny-patch4-window7-224-med-device-classification
SJChaudhuri
2024-05-04T11:12:53Z
212
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-04T11:09:29Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-med-device-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-med-device-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2927 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 1.6093 | 0.5 | | No log | 2.0 | 2 | 1.6063 | 0.5 | | No log | 3.0 | 3 | 1.6063 | 0.5 | | No log | 4.0 | 4 | 1.6100 | 0.25 | | No log | 5.0 | 5 | 1.6100 | 0.25 | | No log | 6.0 | 6 | 1.6170 | 0.25 | | No log | 7.0 | 7 | 1.6170 | 0.25 | | No log | 8.0 | 8 | 1.5910 | 0.25 | | No log | 9.0 | 9 | 1.5910 | 0.25 | | 0.7949 | 10.0 | 10 | 1.5705 | 0.25 | | 0.7949 | 11.0 | 11 | 1.5705 | 0.25 | | 0.7949 | 12.0 | 12 | 1.5368 | 0.25 | | 0.7949 | 13.0 | 13 | 1.5368 | 0.25 | | 0.7949 | 14.0 | 14 | 1.4843 | 0.25 | | 0.7949 | 15.0 | 15 | 1.4843 | 0.25 | | 0.7949 | 16.0 | 16 | 1.4413 | 0.25 | | 0.7949 | 17.0 | 17 | 1.4413 | 0.25 | | 0.7949 | 18.0 | 18 | 1.4050 | 0.5 | | 0.7949 | 19.0 | 19 | 1.4050 | 0.5 | | 0.6509 | 20.0 | 20 | 1.3670 | 0.5 | | 0.6509 | 21.0 | 21 | 1.3670 | 0.5 | | 0.6509 | 22.0 | 22 | 1.3404 | 0.5 | | 0.6509 | 23.0 | 23 | 1.3404 | 0.5 | | 0.6509 | 24.0 | 24 | 1.3212 | 0.5 | | 0.6509 | 25.0 | 25 | 1.3212 | 0.5 | | 0.6509 | 26.0 | 26 | 1.3087 | 0.5 | | 0.6509 | 27.0 | 27 | 1.3087 | 0.5 | | 0.6509 | 28.0 | 28 | 1.2969 | 0.75 | | 0.6509 | 29.0 | 29 | 1.2969 | 0.75 | | 0.5774 | 30.0 | 30 | 1.2927 | 0.75 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
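The card reports only training metrics and leaves usage undocumented; a hedged inference sketch follows, assuming the checkpoint works with the standard `transformers` image-classification pipeline (the image path is a placeholder).

```python
# Hedged inference sketch, not from the original card: load the fine-tuned
# checkpoint with the transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="SJChaudhuri/swin-tiny-patch4-window7-224-med-device-classification",
)
preds = classifier("device_photo.jpg")  # placeholder image path
print(preds)                            # list of {"label": ..., "score": ...} dicts
```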
LoneStriker/Llama-3-70B-Instruct-Gradient-262k-4.65bpw-h6-exl2
LoneStriker
2024-05-04T11:08:07Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "arxiv:2402.08268", "arxiv:2310.05209", "arxiv:2305.14233", "arxiv:2309.00071", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-04T10:50:51Z
--- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 --- <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 70B Instruct Gradient 262K Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message contact@gradient.ai. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 70B's context length from 8k to > 262K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 105M tokens for this stage, and 188M tokens total for all stages, which is < 0.002% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/Ueev-bujAWFusU2uEcy_G.png) **Approach:** - [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) as the base - NTK-aware interpolation [4] following scaling laws [2] to set optimal schedule for RoPE theta - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [1] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [5] to scalably and efficiently train on very long contexts on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. We layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [3], following a similar recipe for data augmentation to [1]. **Progressive Training Details:** | | 65K | 262K | |--------------------------|-----------------|-----------------| | Initialize From | Llama-3-70B-Instruct | 65K | | Sequence Length 2^N | 16 | 18 | | RoPE theta | 15,296,098 | 207,112,184 | | Batch Size | 64 | 16 | | Gradient Accumulation Steps | 1 | 1 | | Steps | 20 | 25 | | Total Tokens | 83,886,080 | 104,857,600 | | Learning rate | 0.00002 | 0.00002 | | # GPUs | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall) | 100 | 170 | **Evaluation Details:** ``` EVAL_MAX_CONTEXT_LENGTH=320200 EVAL_MIN_CONTEXT_LENGTH=100 EVAL_CONTEXT_INTERVAL=16000 EVAL_DEPTH_INTERVAL=0.2 EVAL_NUM_SAMPLES=2 EVAL_RND_NUMBER_DIGITS=8 HAYSTACK: EVAL_GENERATOR_TOKENS=925000 ``` Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals). **Quants:** ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [contact@gradient.ai](mailto:contact@gradient.ai) ## References [1] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." 
arXiv preprint arXiv:2402.08268 (2024). [2] Liu, Xiaoran, et al. "Scaling laws of rope-based extrapolation." arXiv preprint arXiv:2310.05209 (2023). [3] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. [4] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [5] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. 
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-70B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. 
For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. 
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. 
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. 
Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate 
Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
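A hedged usage sketch for the long-context model described above follows. This record itself is an exl2 quant aimed at exllamav2 loaders; the sketch instead assumes the full-precision upstream repository (the repo id below is an assumption, not taken from the card) and enough GPU memory to shard the 70B weights.

```python
# Hedged sketch, not an official recipe: load the full-precision long-context
# weights with transformers. The upstream repo id and the memory assumptions
# are mine; the extended context comes from the retrained RoPE theta stored in
# the checkpoint config, so no manual rope_scaling override is attempted here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gradientai/Llama-3-70B-Instruct-Gradient-262k"  # assumed upstream repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",   # shard the 70B weights across available GPUs
)
print(model.config.max_position_embeddings, getattr(model.config, "rope_theta", None))
```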
miansumairjaved/ppo-PyramidsTraining
miansumairjaved
2024-05-04T11:04:01Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-05-04T11:03:59Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: miansumairjaved/ppo-PyramidsTraining 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
RichardErkhov/codellama_-_CodeLlama-34b-hf-4bits
RichardErkhov
2024-05-04T11:03:14Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2308.12950", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-04T10:43:13Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) CodeLlama-34b-hf - bnb 4bits - Model creator: https://huggingface.co/codellama/ - Original model: https://huggingface.co/codellama/CodeLlama-34b-hf/ Original model description: --- language: - code pipeline_tag: text-generation tags: - llama-2 license: llama2 --- # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 34B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. > [!NOTE] > This is a non-official Code Llama repo. You can find the official Meta repository in the [Meta Llama organization](https://huggingface.co/meta-llama/CodeLlama-34b-hf). | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | | 70B | [codellama/CodeLlama-70b-hf](https://huggingface.co/codellama/CodeLlama-70b-hf) | [codellama/CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf) | [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers: ```bash pip install transformers.git accelerate ``` Model capabilities: - [x] Code completion. - [ ] Infilling. - [ ] Instructions / chat. - [ ] Python specialist. ```python from transformers import AutoTokenizer import transformers import torch model = "codellama/CodeLlama-34b-hf" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'import socket\n\ndef ping_exponential_backoff(host: str):', do_sample=True, top_k=10, temperature=0.1, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=200, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). 
**Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the base version of the 34B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. 
Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
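Because this repository is a pre-quantized bitsandbytes 4-bit export, a hedged loading sketch follows; it assumes `transformers`, `accelerate` and `bitsandbytes` are installed and that the saved config already carries the 4-bit quantization settings.

```python
# Hedged sketch, not part of the original card: load this pre-quantized
# bitsandbytes 4-bit export directly. The saved config is assumed to already
# carry the 4-bit quantization settings, so no extra BitsAndBytesConfig is
# passed here. Requires transformers, accelerate and bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/codellama_-_CodeLlama-34b-hf-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "import socket\n\ndef ping_exponential_backoff(host: str):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True,
                     temperature=0.1, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```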
Nyandori/speech-to-text
Nyandori
2024-05-04T11:00:42Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-04T10:51:24Z
--- license: apache-2.0 ---
KnutJaegersberg/gpt2-chatbot
KnutJaegersberg
2024-05-04T11:00:11Z
63,733
13
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "dataset:KnutJaegersberg/Deita-6k", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-03T06:54:54Z
--- license: apache-2.0 datasets: - KnutJaegersberg/Deita-6k model-index: - name: gpt2-chatbot results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 29.69 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/gpt2-chatbot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 50.27 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/gpt2-chatbot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.42 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/gpt2-chatbot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 40.38 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/gpt2-chatbot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 56.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/gpt2-chatbot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/gpt2-chatbot name: Open LLM Leaderboard --- GPT2-XL SFT on Deita dataset to change Sams mind. Supports multi-turn dialogue within the range of its capabilities. Prompt Example: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### User: How do you fine tune a large language model? ### Assistant: ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63732ebbbd81fae2b3aaf3fb/KfFbm5kNUwpAlObBYvx19.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__gpt2-chatbot) | Metric |Value| |---------------------------------|----:| |Avg. |33.91| |AI2 Reasoning Challenge (25-Shot)|29.69| |HellaSwag (10-Shot) |50.27| |MMLU (5-Shot) |26.42| |TruthfulQA (0-shot) |40.38| |Winogrande (5-shot) |56.67| |GSM8k (5-shot) | 0.00|
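A hedged generation sketch using the prompt template documented above; the sampling settings are illustrative and not taken from the card.

```python
# Hedged sketch, not from the original card: generate with the documented
# "### System / ### User / ### Assistant" prompt template via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "KnutJaegersberg/gpt2-chatbot"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to "
    "complete the task as faithfully as you can. While performing the task "
    "think step-by-step and justify your steps.\n\n"
    "### User:\n"
    "How do you fine tune a large language model?\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```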
ArunIcfoss/mal_tam_instruct_gemma_trans1
ArunIcfoss
2024-05-04T10:52:33Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/gemma-2b-it-bnb-4bit", "base_model:adapter:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2024-05-04T10:43:53Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - unsloth - generated_from_trainer base_model: unsloth/gemma-2b-it-bnb-4bit model-index: - name: mal_tam_instruct_gemma_trans1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mal_tam_instruct_gemma_trans1 This model is a fine-tuned version of [unsloth/gemma-2b-it-bnb-4bit](https://huggingface.co/unsloth/gemma-2b-it-bnb-4bit) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 240 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
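The card does not show how to use the adapter; below is a hedged sketch that attaches it to the stated base model with PEFT. The prompt string is a placeholder, since the instruction format used during training is not documented.

```python
# Hedged sketch, not from the original card: attach this LoRA adapter to its
# 4-bit base model with PEFT. Assumes a standard PEFT adapter layout and that
# bitsandbytes is installed for the pre-quantized base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-2b-it-bnb-4bit"                  # base model named in the card
adapter_id = "ArunIcfoss/mal_tam_instruct_gemma_trans1"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Translate to Tamil: ..."   # placeholder; actual instruction format unknown
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```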
Undi95/Unholy-8B-DPO-OAS
Undi95
2024-05-04T10:49:03Z
18
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-04T06:01:29Z
This is a TEST. It was made with a custom Orthogonal Activation Steering script I shared HERE: https://huggingface.co/posts/Undi95/318385306588047#663609dc1818d469455c0222 (but be ready to put your hands in some fucked up code bro) Steps: - First I took Unholy (a fine-tune of L3 on a Toxic Dataset) - Then I trained 2 epochs of DPO on top, with the SAME dataset (https://wandb.ai/undis95/Uncensored8BDPO/runs/3rg4rz13/workspace?nw=nwuserundis95) - Finally, I used OAS on top, bruteforcing the layers to find the best one (I don't really understand all of this, sorry)
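The recipe above is described only in words; the toy sketch below illustrates the weight-orthogonalization step that OAS-style scripts typically perform. It is a reconstruction of the general technique under my own assumptions (shape conventions, how the steering direction is obtained), not the author's linked script.

```python
# Hedged toy sketch of orthogonal activation steering / refusal-direction
# ablation, NOT the author's script: given a "steering direction" r in the
# residual stream, project it out of a weight matrix whose output feeds the
# residual stream. How r is estimated (e.g. mean activation difference on
# contrasting prompt sets) is left out and assumed.
import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component along `direction` from a (d_model, d_in) weight."""
    r = direction / direction.norm()
    # Subtracting the rank-1 projection r r^T W means the layer can no longer
    # write anything along r into the residual stream.
    return weight - torch.outer(r, r) @ weight

d_model, d_in = 8, 4            # toy sizes for illustration
W = torch.randn(d_model, d_in)
r = torch.randn(d_model)
W_ablated = orthogonalize(W, r)
# The ablated weight now has no output component along r:
print(torch.allclose(r @ W_ablated, torch.zeros(d_in), atol=1e-4))  # True
```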
ThuyNT/CS505_COQE_viT5_train_InstructionN4_ASPOL_v1
ThuyNT
2024-05-04T10:42:16Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T02:06:53Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_ASPOL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_ASPOL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
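A hedged inference sketch for this checkpoint with the `text2text-generation` pipeline; the input string is a placeholder because the card does not document the instruction format for the ASPOL tuple-extraction task.

```python
# Hedged sketch, not from the original card: run the fine-tuned viT5 checkpoint
# with the transformers text2text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="ThuyNT/CS505_COQE_viT5_train_InstructionN4_ASPOL_v1",
)
# Placeholder Vietnamese input; the real instruction template is not documented.
print(generator("Ví dụ câu đầu vào ...", max_new_tokens=64))
```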
archbold/ppo-LunarLander-v2_unit8
archbold
2024-05-04T10:42:16Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-05-04T10:42:10Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -108.90 +/- 34.38 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': '__file__' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'archbold/ppo-LunarLander-v2_unit8' 'batch_size': 512 'minibatch_size': 128} ```
RichardErkhov/Locutusque_-_OpenCerebrum-2.0-7B-8bits
RichardErkhov
2024-05-04T10:40:18Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-04T09:25:59Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OpenCerebrum-2.0-7B - bnb 8bits - Model creator: https://huggingface.co/Locutusque/ - Original model: https://huggingface.co/Locutusque/OpenCerebrum-2.0-7B/ Original model description: --- language: - en license: apache-2.0 tags: - open-source - code - math - chemistry - biology - text-generation - question-answering pipeline_tag: text-generation --- # OpenCerebrum-2.0-7B OpenCerebrum-2.0-7B is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of Aether Research's proprietary Cerebrum model. The model was fine-tuned with SFT and DPO on approximately 7,000 examples across 15 data sources spanning coding, math, science, multi-turn conversation, RAG, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels. ## Model Details - **Base Model:** alpindale/Mistral-7B-v0.2-hf - **Parameters:** 7 billion - **Fine-Tuning Dataset Size:** ~7,000 examples - **Fine-Tuning Data:** Advanced in-house curation techniques at Cognitive Computations, with 15 different data sources for DPO and SFT. - **Language:** English - **License:** Apache 2.0 ## Quants ### EXL2 [@bartowski](https://huggingface.co/bartowski/) - https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2 ### GGUF [@bartowski](https://huggingface.co/bartowski/) - https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-GGUF ## Intended Use OpenCerebrum-2.0-7B is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities. However, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs. ## Limitations and Biases - The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these. - As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models. 
## Evaluations | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |--------------|------:|------|-----:|------|-----:|---|-----:| |truthfulqa_mc2| 2|none | 0|acc |0.5182|± |0.0152| |ai2_arc |N/A |none | 0|acc |0.7060|± |0.0073| | | |none | 0|acc_norm|0.7049|± |0.0074| | - arc_challenge | 1|none | 0|acc |0.5000|± |0.0146| | | |none | 0|acc_norm|0.5299|± |0.0146| | - arc_easy | 1|none | 0|acc |0.8077|± |0.0081| | | |none | 0|acc_norm|0.7912|± |0.0083| |agieval_nous |N/A |none | 0|acc |0.3778|± |0.0093| | | |none | 0|acc_norm|0.3574|± |0.0093| | - agieval_aqua_rat | 1|none | 0|acc |0.2402|± |0.0269| | | |none | 0|acc_norm|0.2205|± |0.0261| | - agieval_logiqa_en | 1|none | 0|acc |0.3164|± |0.0182| | | |none | 0|acc_norm|0.3656|± |0.0189| | - agieval_lsat_ar | 1|none | 0|acc |0.2130|± |0.0271| | | |none | 0|acc_norm|0.1913|± |0.0260| | - agieval_lsat_lr | 1|none | 0|acc |0.4078|± |0.0218| | | |none | 0|acc_norm|0.3647|± |0.0213| | - agieval_lsat_rc | 1|none | 0|acc |0.4981|± |0.0305| | | |none | 0|acc_norm|0.4498|± |0.0304| | - agieval_sat_en | 1|none | 0|acc |0.6650|± |0.0330| | | |none | 0|acc_norm|0.5922|± |0.0343| | - agieval_sat_en_without_passage| 1|none | 0|acc |0.4612|± |0.0348| | | |none | 0|acc_norm|0.3932|± |0.0341| | - agieval_sat_math | 1|none | 0|acc |0.3273|± |0.0317| | | |none | 0|acc_norm|0.2818|± |0.0304|
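A hedged sketch of reproducing part of the table above with lm-evaluation-harness; it assumes a recent harness version exposing `simple_evaluate` and that the 8-bit checkpoint fits in available GPU memory.

```python
# Hedged sketch, not from the original card: re-run a slice of the reported
# evaluations with lm-evaluation-harness. Task names are taken from the table
# above; the harness API usage is an assumption about recent versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=RichardErkhov/Locutusque_-_OpenCerebrum-2.0-7B-8bits",
    tasks=["truthfulqa_mc2", "arc_challenge", "arc_easy"],
    num_fewshot=0,
)
print(results["results"])
```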
ilyesdjerfaf/vit-base-patch16-224-in21k-quickdraw
ilyesdjerfaf
2024-05-04T10:37:23Z
194
2
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-02T21:00:06Z
--- library_name: transformers metrics: - accuracy - precision - recall - f1 pipeline_tag: image-classification --- ## Model Details ### Model Description This model was fine-tuned on a sample of the quickdraw dataset (20 labels with 250 images each). - **Developed by:** [Ilan Aliouchouche](https://github.com/ilanaliouchouche) & [Ilyes Djerfaf](https://github.com/ilyesdjerfaf) - **Finetuned from model:** [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) ### Model Sources - **Repository:** [https://github.com/mlengineershub/QuickDraw-ML](https://github.com/mlengineershub/QuickDraw-ML)
roycosta77/code-search-net-tokenizer1
roycosta77
2024-05-04T10:27:37Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-04T10:27:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Unclad3610/Reinforce-CartPolev1
Unclad3610
2024-05-04T10:25:31Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-04T10:25:20Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPolev1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
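The course utilities are the intended way to load and evaluate this agent; as a hedged fallback, the sketch below rolls out a greedy policy on `CartPole-v1`. The checkpoint filename (`model.pt`) and the two-layer policy architecture are assumptions drawn from the Unit 4 notebook, not details documented by this repository.

```python
# Hedged evaluation sketch. Assumptions (not documented in this repo):
# the checkpoint is a pickled torch module named "model.pt", saved from a
# Policy class with fc1/fc2 layers as in the Unit 4 notebook.
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F
from huggingface_hub import hf_hub_download

class Policy(nn.Module):  # class must be importable so the pickled module can be restored
    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

path = hf_hub_download("Unclad3610/Reinforce-CartPolev1", "model.pt")  # assumed filename
policy = torch.load(path, map_location="cpu")

env = gym.make("CartPole-v1")
state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    probs = policy(torch.from_numpy(state).float().unsqueeze(0))
    action = torch.argmax(probs, dim=1).item()  # greedy action for evaluation
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```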
ThuyNT/CS505_COQE_viT5_train_InstructionN4_SPAOL_v1
ThuyNT
2024-05-04T10:24:40Z
100
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T05:20:37Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_SPAOL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_SPAOL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
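Since the card does not yet include a usage snippet, here is a minimal text2text-generation sketch with 🤗 Transformers. The instruction prompt used at training time (InstructionN4, SPAOL element order) is not documented here, so the input string below is only a placeholder.

```python
# Minimal inference sketch for this fine-tuned ViT5 checkpoint.
# The training-time instruction format is an unknown; replace the placeholder
# text with a Vietnamese comparative sentence plus the expected instruction.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT/CS505_COQE_viT5_train_InstructionN4_SPAOL_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # placeholder: instruction + input sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```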
ThuyNT/CS505_COQE_viT5_train_InstructionN4_POSAL_v1
ThuyNT
2024-05-04T10:24:17Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T03:00:17Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_POSAL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_POSAL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
ThuyNT/CS505_COQE_viT5_train_InstructionN4_OASPL_v1
ThuyNT
2024-05-04T10:24:09Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-04T03:21:09Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_train_InstructionN4_OASPL_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN4_OASPL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
unography/blip-long-cap-9ep
unography
2024-05-04T10:17:39Z
62
0
transformers
[ "transformers", "safetensors", "blip", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-04T10:05:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
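The "How to Get Started with the Model" section above is still empty; the following is a minimal captioning sketch, assuming this checkpoint follows the standard BLIP conditional-generation layout suggested by its `blip` / `image-text-to-text` tags. The example image URL and generation length are illustrative only.

```python
# Hedged captioning sketch; the model class is assumed from the repo's tags.
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "unography/blip-long-cap-9ep"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any RGB image works
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)  # longer budget for long captions
print(processor.decode(out[0], skip_special_tokens=True))
```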
comehope/results
comehope
2024-05-04T10:13:24Z
159
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-04T10:13:16Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan model-index: - name: ft-hubert-on-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-hubert-on-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 100 | 1.9505 | 0.43 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
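For reference, a minimal inference sketch with the high-level `pipeline` API, assuming the checkpoint is used as a standard audio classifier over GTZAN-style music clips. Note that the repository id is `comehope/results` even though the run is named `ft-hubert-on-gtzan`; the audio file path below is a placeholder.

```python
# Hedged audio-classification sketch for this fine-tuned distilhubert checkpoint.
from transformers import pipeline

classifier = pipeline("audio-classification", model="comehope/results")
predictions = classifier("path/to/clip.wav")  # placeholder: a ~30 s music clip, as in GTZAN
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```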
stefandi/bert-base-uncased-finetuned-swag
stefandi
2024-05-04T10:03:48Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2024-05-04T07:37:36Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0077 - Accuracy: 0.7883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7657 | 1.0 | 4597 | 0.6219 | 0.7555 | | 0.3777 | 2.0 | 9194 | 0.6123 | 0.7835 | | 0.1461 | 3.0 | 13791 | 1.0077 | 0.7883 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
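For reference, a minimal SWAG-style multiple-choice inference sketch: each candidate ending is paired with the shared context and the model scores the four pairs jointly. The context and endings below are made up for illustration.

```python
# Hedged multiple-choice sketch (SWAG-style) for this fine-tuned BERT checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "stefandi/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "A woman picks up a violin and raises the bow."
endings = [
    "She begins to play a melody.",
    "She throws the violin into a lake.",
    "She starts cooking dinner.",
    "She rides the violin like a bicycle.",
]
inputs = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
# The model expects input shape (batch, num_choices, seq_len), so add a batch dimension.
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(endings[logits.argmax(dim=-1).item()])
```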
manhdofts03/pokemon_outp
manhdofts03
2024-05-04T10:02:30Z
32
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-04T09:55:02Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion - diffusers-training base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual inversion text2image fine-tuning - manhdofts03/pokemon_outp These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
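Until the TODO snippet above is filled in, the following is a minimal inference sketch with Diffusers. The placeholder token (`<pokemon-style>`) is an assumption; check the learned embedding file in this repository for the actual token name.

```python
# Hedged textual-inversion inference sketch.
# Assumption: the placeholder token "<pokemon-style>" stands in for whatever
# token was learned during training (not documented in this card).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU; drop .to("cuda") and the fp16 dtype for CPU
pipe.load_textual_inversion("manhdofts03/pokemon_outp")

image = pipe("a <pokemon-style> creature in a forest", num_inference_steps=30).images[0]
image.save("pokemon_sample.png")
```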
RyotaKadoya1993/checkpoint-v1
RyotaKadoya1993
2024-05-04T09:57:00Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:RyotaKadoya1993/Llama-3-JPN-MoE2", "base_model:adapter:RyotaKadoya1993/Llama-3-JPN-MoE2", "region:us" ]
null
2024-05-04T09:48:52Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: RyotaKadoya1993/Llama-3-JPN-MoE2 model-index: - name: checkpoint-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoint-v1 This model is a fine-tuned version of [RyotaKadoya1993/Llama-3-JPN-MoE2](https://huggingface.co/RyotaKadoya1993/Llama-3-JPN-MoE2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
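A minimal, hedged sketch for loading this adapter on top of its base model with PEFT. Whether the adapter expects a chat template or plain-text prompts is not documented, so the Japanese prompt below is only a placeholder.

```python
# Hedged sketch: attach the SFT adapter to its base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "RyotaKadoya1993/Llama-3-JPN-MoE2"
adapter_id = "RyotaKadoya1993/checkpoint-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map requires accelerate
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("日本の首都はどこですか?", return_tensors="pt").to(model.device)  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```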