| Column | Type | Observed values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-28 18:27:53 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 525 values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-28 18:27:52 |
| card | string | length 11 to 1.01M |
IamYash/dSprites-small-AttentionOnly
IamYash
2023-11-18T05:08:39Z
0
0
null
[ "region:us" ]
null
2023-11-18T03:31:58Z
---
license: mit
---

## Overview

This experiment was created to analyze the training dynamics of vision transformers under the [Prisma](https://github.com/soniajoseph/ViT-Prisma.git) project. Small Vision Transformers were trained and evaluated on shape classification with the dSprites dataset, which consists of procedurally generated 2D shapes controlled by six independent latent factors. The specific task was to classify the three distinct shapes in dSprites using ViTs.

All of the training checkpoints are available on the Hugging Face Hub. The checkpoints are summarised in the following table, with links to the models on the Hub:

| Size | No. Layers | AttentionOnly | Attention-and-MLP |
|:---:|:---:|:---:|:---:|
| tiny | 1 | [link](https://huggingface.co/IamYash/dSprites-tiny-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-tiny-Attention-and-MLP) |
| base | 2 | [link](https://huggingface.co/IamYash/dSprites-base-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-base-Attention-and-MLP) |
| small | 3 | [link](https://huggingface.co/IamYash/dSprites-small-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-small-Attention-and-MLP) |
| medium | 4 | [link](https://huggingface.co/IamYash/dSprites-medium-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-medium-Attention-and-MLP) |

Each repo contains multiple intermediate checkpoints. Each checkpoint is stored as `"checkpoint_{i}.pth"`, where `i` is the number of training samples the model had seen at that point. Further details on training and results are described [here](https://github.com/soniajoseph/ViT-Prisma/tree/main/docs).

## How to Use

```python
!git clone https://github.com/soniajoseph/ViT-Prisma
%cd ViT-Prisma
!pip install -e .
```

```python
from huggingface_hub import hf_hub_download
import torch

REPO_ID = "IamYash/dSprites-tiny-AttentionOnly"
FILENAME = "model_0.pth"

checkpoint = torch.load(
    hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
)
```

```python
from vit_prisma.models.base_vit import BaseViT
from vit_prisma.configs.DSpritesConfig import GlobalConfig
from vit_prisma.utils.wandb_utils import update_dataclass_from_dict

config = GlobalConfig()
print(config)

update_dict = {
    'transformer': {
        'attention_only': True,
        'hidden_dim': 512,
        'num_heads': 8,
        'num_layers': 1
    }
}
update_dataclass_from_dict(config, update_dict)

model = BaseViT(config)
model.load_state_dict(checkpoint['model_state_dict'])
```
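Since each repo stores its intermediate snapshots as `checkpoint_{i}.pth`, one way to discover and load them is to list the repo first; a minimal sketch (the exact file names depend on what the repo actually contains):

```python
from huggingface_hub import hf_hub_download, list_repo_files
import torch

REPO_ID = "IamYash/dSprites-small-AttentionOnly"

# Discover which checkpoint files the repo actually contains
ckpts = sorted(f for f in list_repo_files(REPO_ID) if f.endswith(".pth"))
print(ckpts)

# Load one intermediate checkpoint (map to CPU so no GPU is required)
path = hf_hub_download(repo_id=REPO_ID, filename=ckpts[0])
checkpoint = torch.load(path, map_location="cpu")
```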
IamYash/dSprites-medium-Attention-and-MLP
IamYash
2023-11-18T05:08:22Z
0
0
null
[ "region:us" ]
null
2023-11-18T03:31:59Z
---
license: mit
---

## Overview

This experiment was created to analyze the training dynamics of vision transformers under the [Prisma](https://github.com/soniajoseph/ViT-Prisma.git) project. Small Vision Transformers were trained and evaluated on shape classification with the dSprites dataset, which consists of procedurally generated 2D shapes controlled by six independent latent factors. The specific task was to classify the three distinct shapes in dSprites using ViTs.

All of the training checkpoints are available on the Hugging Face Hub. The checkpoints are summarised in the following table, with links to the models on the Hub:

| Size | No. Layers | AttentionOnly | Attention-and-MLP |
|:---:|:---:|:---:|:---:|
| tiny | 1 | [link](https://huggingface.co/IamYash/dSprites-tiny-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-tiny-Attention-and-MLP) |
| base | 2 | [link](https://huggingface.co/IamYash/dSprites-base-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-base-Attention-and-MLP) |
| small | 3 | [link](https://huggingface.co/IamYash/dSprites-small-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-small-Attention-and-MLP) |
| medium | 4 | [link](https://huggingface.co/IamYash/dSprites-medium-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-medium-Attention-and-MLP) |

Each repo contains multiple intermediate checkpoints. Each checkpoint is stored as `"checkpoint_{i}.pth"`, where `i` is the number of training samples the model had seen at that point. Further details on training and results are described [here](https://github.com/soniajoseph/ViT-Prisma/tree/main/docs).

## How to Use

```python
!git clone https://github.com/soniajoseph/ViT-Prisma
%cd ViT-Prisma
!pip install -e .
```

```python
from huggingface_hub import hf_hub_download
import torch

REPO_ID = "IamYash/dSprites-tiny-AttentionOnly"
FILENAME = "model_0.pth"

checkpoint = torch.load(
    hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
)
```

```python
from vit_prisma.models.base_vit import BaseViT
from vit_prisma.configs.DSpritesConfig import GlobalConfig
from vit_prisma.utils.wandb_utils import update_dataclass_from_dict

config = GlobalConfig()
print(config)

update_dict = {
    'transformer': {
        'attention_only': True,
        'hidden_dim': 512,
        'num_heads': 8,
        'num_layers': 1
    }
}
update_dataclass_from_dict(config, update_dict)

model = BaseViT(config)
model.load_state_dict(checkpoint['model_state_dict'])
```
IamYash/dSprites-small-Attention-and-MLP
IamYash
2023-11-18T05:07:53Z
0
0
null
[ "region:us" ]
null
2023-11-18T03:31:58Z
---
license: mit
---

## Overview

This experiment was created to analyze the training dynamics of vision transformers under the [Prisma](https://github.com/soniajoseph/ViT-Prisma.git) project. Small Vision Transformers were trained and evaluated on shape classification with the dSprites dataset, which consists of procedurally generated 2D shapes controlled by six independent latent factors. The specific task was to classify the three distinct shapes in dSprites using ViTs.

All of the training checkpoints are available on the Hugging Face Hub. The checkpoints are summarised in the following table, with links to the models on the Hub:

| Size | No. Layers | AttentionOnly | Attention-and-MLP |
|:---:|:---:|:---:|:---:|
| tiny | 1 | [link](https://huggingface.co/IamYash/dSprites-tiny-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-tiny-Attention-and-MLP) |
| base | 2 | [link](https://huggingface.co/IamYash/dSprites-base-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-base-Attention-and-MLP) |
| small | 3 | [link](https://huggingface.co/IamYash/dSprites-small-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-small-Attention-and-MLP) |
| medium | 4 | [link](https://huggingface.co/IamYash/dSprites-medium-AttentionOnly) | [link](https://huggingface.co/IamYash/dSprites-medium-Attention-and-MLP) |

Each repo contains multiple intermediate checkpoints. Each checkpoint is stored as `"checkpoint_{i}.pth"`, where `i` is the number of training samples the model had seen at that point. Further details on training and results are described [here](https://github.com/soniajoseph/ViT-Prisma/tree/main/docs).

## How to Use

```python
!git clone https://github.com/soniajoseph/ViT-Prisma
%cd ViT-Prisma
!pip install -e .
```

```python
from huggingface_hub import hf_hub_download
import torch

REPO_ID = "IamYash/dSprites-tiny-AttentionOnly"
FILENAME = "model_0.pth"

checkpoint = torch.load(
    hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
)
```

```python
from vit_prisma.models.base_vit import BaseViT
from vit_prisma.configs.DSpritesConfig import GlobalConfig
from vit_prisma.utils.wandb_utils import update_dataclass_from_dict

config = GlobalConfig()
print(config)

update_dict = {
    'transformer': {
        'attention_only': True,
        'hidden_dim': 512,
        'num_heads': 8,
        'num_layers': 1
    }
}
update_dataclass_from_dict(config, update_dict)

model = BaseViT(config)
model.load_state_dict(checkpoint['model_state_dict'])
```
LarryAIDraw/himaragi_yukina
LarryAIDraw
2023-11-18T05:06:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-18T04:58:40Z
---
license: creativeml-openrail-m
---
https://civitai.com/models/203396/himeragi-yukina-strike-the-blood
LarryAIDraw/iroha_isshiki_v2
LarryAIDraw
2023-11-18T05:05:43Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-18T04:57:56Z
---
license: creativeml-openrail-m
---
https://civitai.com/models/127437/iroha-isshiki-or-my-teen-romantic-comedy-is-wrong-as-i-expected-oregairu
LarryAIDraw/shalom-v1-nai-7ep-resize
LarryAIDraw
2023-11-18T05:05:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-18T04:56:47Z
---
license: creativeml-openrail-m
---
https://civitai.com/models/150815/charactershalom-path-to-nowhere
LarryAIDraw/CHAR-TokisakiKurumi
LarryAIDraw
2023-11-18T05:04:40Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-18T04:55:45Z
---
license: creativeml-openrail-m
---
https://civitai.com/models/203631/kurumi-tokisaki-or-date-a-live
agoncharenko1992/llm_challenge_evaluation
agoncharenko1992
2023-11-18T04:59:40Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2023-11-17T10:00:14Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
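The quick-start section above is left as a stub, so here is a minimal loading sketch based only on the card's metadata (library `peft`, base model `mistralai/Mistral-7B-v0.1`); treat it as an assumption rather than the author's intended usage:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the PEFT adapter stored in this repo on top of the base model
model = PeftModel.from_pretrained(base, "agoncharenko1992/llm_challenge_evaluation")
```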
openerotica/basilisk-7b-v0.2-AWQ-128g
openerotica
2023-11-18T04:45:17Z
3
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-08T10:58:22Z
---
license: apache-2.0
---

The prompt format is Vicuna/ShareGPT.

This model was trained on a subset of orca-best combined with most of freedom-rp. To put it as a ratio, the dataset is roughly 90% orca-best and about 10% degenerate reverse proxy logs. The goal was to create a model with the intelligence and capacity of orca-best, but with enhanced roleplay and adult content capabilities. If you are looking for a model that is trained purely on wanton degeneracy without any attempt to retain intelligence, check out my cockatrice model.

You can find the dataset used to train this model here: https://huggingface.co/datasets/openerotica/basilisk-v0.2

If you like what I'm trying to do, please consider subscribing to my patreon. I'm only asking for about tree fiddy. https://patreon.com/openerotica
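For reference, a sketch of the Vicuna-1.1-style layout that "Vicuna/ShareGPT" usually refers to (an assumption based on the common convention, not taken from this repo's files):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {prompt}
ASSISTANT:
```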
openerotica/basilisk-7b-v0.2-AWQ-32g
openerotica
2023-11-18T04:44:52Z
3
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-08T10:55:18Z
---
license: apache-2.0
---

The prompt format is Vicuna/ShareGPT.

This model was trained on a subset of orca-best combined with most of freedom-rp. To put it as a ratio, the dataset is roughly 90% orca-best and about 10% degenerate reverse proxy logs. The goal was to create a model with the intelligence and capacity of orca-best, but with enhanced roleplay and adult content capabilities. If you are looking for a model that is trained purely on wanton degeneracy without any attempt to retain intelligence, check out my cockatrice model.

You can find the dataset used to train this model here: https://huggingface.co/datasets/openerotica/basilisk-v0.2

If you like what I'm trying to do, please consider subscribing to my patreon. I'm only asking for about tree fiddy. https://patreon.com/openerotica
openerotica/basilisk-7b-v0.2-EXL2-8-bpw
openerotica
2023-11-18T04:43:41Z
6
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-08T11:05:30Z
---
license: apache-2.0
---

The prompt format is Vicuna/ShareGPT.

This model was trained on a subset of orca-best combined with most of freedom-rp. To put it as a ratio, the dataset is roughly 90% orca-best and about 10% degenerate reverse proxy logs. The goal was to create a model with the intelligence and capacity of orca-best, but with enhanced roleplay and adult content capabilities. If you are looking for a model that is trained purely on wanton degeneracy without any attempt to retain intelligence, check out my cockatrice model.

You can find the dataset used to train this model here: https://huggingface.co/datasets/openerotica/basilisk-v0.2

If you like what I'm trying to do, please consider subscribing to my patreon. I'm only asking for about tree fiddy. https://patreon.com/openerotica
openerotica/basilisk-7b-v0.2
openerotica
2023-11-18T04:42:28Z
174
16
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-31T10:46:00Z
---
license: apache-2.0
---

The prompt format is Vicuna/ShareGPT.

This model was trained on a subset of orca-best combined with most of freedom-rp. To put it as a ratio, the dataset is roughly 90% orca-best and about 10% degenerate reverse proxy logs. The goal was to create a model with the intelligence and capacity of orca-best, but with enhanced roleplay and adult content capabilities. If you are looking for a model that is trained purely on wanton degeneracy without any attempt to retain intelligence, check out my cockatrice model.

You can find the dataset used to train this model here: https://huggingface.co/datasets/openerotica/basilisk-v0.2

If you like what I'm trying to do, please consider subscribing to my patreon. I'm only asking for about tree fiddy. https://patreon.com/openerotica
Tirthankar/mms_kas_speed1
Tirthankar
2023-11-18T04:09:14Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-11-18T04:06:35Z
--- license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - generated_from_trainer metrics: - wer model-index: - name: mms_kas_speed1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms_kas_speed1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8096 - Wer: 0.5141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.8005 | 0.12 | 100 | 1.8647 | 1.0536 | | 1.8539 | 0.23 | 200 | 1.5445 | 0.9513 | | 1.6826 | 0.35 | 300 | 1.5297 | 0.9316 | | 1.6042 | 0.46 | 400 | 1.3865 | 0.9056 | | 1.5353 | 0.58 | 500 | 1.3953 | 0.9043 | | 1.5256 | 0.69 | 600 | 1.4043 | 0.9020 | | 1.5491 | 0.81 | 700 | 1.3509 | 0.8694 | | 1.475 | 0.93 | 800 | 1.4780 | 0.9107 | | 1.4637 | 1.04 | 900 | 1.2694 | 0.8848 | | 1.4401 | 1.16 | 1000 | 1.4087 | 0.8711 | | 1.4125 | 1.27 | 1100 | 1.3147 | 0.8519 | | 1.3864 | 1.39 | 1200 | 1.1807 | 0.8231 | | 1.3294 | 1.5 | 1300 | 1.1504 | 0.8144 | | 1.3606 | 1.62 | 1400 | 1.2630 | 0.8449 | | 1.3254 | 1.74 | 1500 | 1.3070 | 0.8340 | | 1.349 | 1.85 | 1600 | 1.1008 | 0.7980 | | 1.3251 | 1.97 | 1700 | 1.1150 | 0.7854 | | 1.3539 | 2.08 | 1800 | 1.0956 | 0.7943 | | 1.266 | 2.2 | 1900 | 1.1196 | 0.7870 | | 1.2787 | 2.31 | 2000 | 1.1659 | 0.7994 | | 1.2902 | 2.43 | 2100 | 1.1099 | 0.7827 | | 1.2761 | 2.55 | 2200 | 1.1361 | 0.8044 | | 1.2559 | 2.66 | 2300 | 1.1497 | 0.8049 | | 1.2294 | 2.78 | 2400 | 1.2398 | 0.8053 | | 1.2438 | 2.89 | 2500 | 1.1234 | 0.7849 | | 1.2736 | 3.01 | 2600 | 1.0433 | 0.7618 | | 1.2337 | 3.12 | 2700 | 1.0905 | 0.7703 | | 1.2079 | 3.24 | 2800 | 1.3420 | 0.8411 | | 1.209 | 3.36 | 2900 | 1.0911 | 0.7879 | | 1.2158 | 3.47 | 3000 | 1.2058 | 0.8023 | | 1.176 | 3.59 | 3100 | 1.1623 | 0.7880 | | 1.1775 | 3.7 | 3200 | 1.0644 | 0.7419 | | 1.2212 | 3.82 | 3300 | 1.0549 | 0.7605 | | 1.1774 | 3.94 | 3400 | 1.1500 | 0.7675 | | 1.1046 | 4.05 | 3500 | 0.9748 | 0.7184 | | 1.1979 | 4.17 | 3600 | 1.0100 | 0.7255 | | 1.1544 | 4.28 | 3700 | 1.0290 | 0.7275 | | 1.1558 | 4.4 | 3800 | 1.0180 | 0.7314 | | 1.1593 | 4.51 | 3900 | 1.0051 | 0.7097 | | 1.1415 | 4.63 | 4000 | 1.0115 | 0.7353 | | 1.1399 | 4.75 | 4100 | 1.0527 | 0.7241 | | 1.1297 | 4.86 | 4200 | 1.0546 | 0.7317 | | 1.1137 | 4.98 | 4300 | 1.0818 | 0.7462 | | 1.1948 | 5.09 | 4400 | 1.0758 | 0.7360 | | 1.0913 | 5.21 | 4500 | 1.0204 | 0.7210 | | 1.1632 | 5.32 | 4600 | 0.9644 | 0.7132 | | 1.1216 | 5.44 | 4700 | 0.9569 | 0.6822 | | 1.0728 | 5.56 | 4800 | 0.9980 | 0.7230 | | 1.1097 | 5.67 | 4900 | 0.9867 | 0.7014 | | 1.072 | 5.79 | 5000 | 0.9946 | 0.6972 | | 1.1009 | 5.9 | 5100 | 0.9339 | 0.6818 | | 1.1313 | 6.02 | 5200 | 0.9417 | 0.6842 | | 1.0768 | 6.13 | 5300 | 0.9828 | 0.7195 | | 1.0997 | 6.25 | 5400 | 1.0191 | 0.7258 | | 1.0839 | 6.37 | 5500 | 1.0013 | 0.7045 | | 1.0997 | 6.48 | 
5600 | 0.9832 | 0.7159 | | 1.0854 | 6.6 | 5700 | 1.0778 | 0.7397 | | 1.0398 | 6.71 | 5800 | 1.0442 | 0.7268 | | 1.0398 | 6.83 | 5900 | 1.0284 | 0.6932 | | 1.0773 | 6.94 | 6000 | 1.1135 | 0.7522 | | 1.0676 | 7.06 | 6100 | 0.9657 | 0.6816 | | 1.0227 | 7.18 | 6200 | 0.9636 | 0.6695 | | 1.0415 | 7.29 | 6300 | 0.9700 | 0.6709 | | 1.0438 | 7.41 | 6400 | 0.9603 | 0.6662 | | 1.0452 | 7.52 | 6500 | 0.9563 | 0.6674 | | 1.0295 | 7.64 | 6600 | 0.9782 | 0.6633 | | 1.0722 | 7.75 | 6700 | 0.9988 | 0.6752 | | 0.9848 | 7.87 | 6800 | 0.9744 | 0.6897 | | 1.0332 | 7.99 | 6900 | 0.9118 | 0.6485 | | 1.0041 | 8.1 | 7000 | 0.8834 | 0.6329 | | 1.0168 | 8.22 | 7100 | 0.9263 | 0.6365 | | 1.0368 | 8.33 | 7200 | 1.0263 | 0.6867 | | 1.0407 | 8.45 | 7300 | 1.0120 | 0.7029 | | 1.0175 | 8.56 | 7400 | 0.8795 | 0.6295 | | 1.0289 | 8.68 | 7500 | 0.8969 | 0.6294 | | 1.018 | 8.8 | 7600 | 0.9635 | 0.6718 | | 1.005 | 8.91 | 7700 | 0.9609 | 0.6625 | | 1.0355 | 9.03 | 7800 | 0.8945 | 0.6302 | | 0.9918 | 9.14 | 7900 | 0.8980 | 0.6427 | | 1.0118 | 9.26 | 8000 | 0.8830 | 0.6211 | | 1.0235 | 9.38 | 8100 | 0.8767 | 0.6207 | | 0.9781 | 9.49 | 8200 | 0.8673 | 0.6104 | | 0.9999 | 9.61 | 8300 | 0.9355 | 0.6280 | | 0.9523 | 9.72 | 8400 | 0.8717 | 0.6121 | | 0.9823 | 9.84 | 8500 | 0.8792 | 0.6220 | | 1.0153 | 9.95 | 8600 | 0.9116 | 0.6311 | | 1.0141 | 10.07 | 8700 | 0.8710 | 0.6157 | | 0.9347 | 10.19 | 8800 | 0.9062 | 0.6315 | | 0.9759 | 10.3 | 8900 | 0.8952 | 0.6227 | | 0.9917 | 10.42 | 9000 | 0.8938 | 0.6283 | | 0.9994 | 10.53 | 9100 | 0.8733 | 0.6225 | | 0.9571 | 10.65 | 9200 | 0.9060 | 0.6364 | | 0.9428 | 10.76 | 9300 | 0.8709 | 0.6237 | | 0.9431 | 10.88 | 9400 | 0.8321 | 0.5943 | | 0.8845 | 11.0 | 9500 | 0.8420 | 0.6032 | | 0.9799 | 11.11 | 9600 | 0.8888 | 0.6028 | | 0.977 | 11.23 | 9700 | 0.8922 | 0.6046 | | 0.9392 | 11.34 | 9800 | 0.8611 | 0.5955 | | 0.9547 | 11.46 | 9900 | 0.8472 | 0.5885 | | 0.9546 | 11.57 | 10000 | 0.8656 | 0.5942 | | 0.9166 | 11.69 | 10100 | 0.8665 | 0.5987 | | 0.9515 | 11.81 | 10200 | 0.8541 | 0.6064 | | 0.9418 | 11.92 | 10300 | 0.8384 | 0.5919 | | 0.9039 | 12.04 | 10400 | 0.8492 | 0.5828 | | 0.8965 | 12.15 | 10500 | 0.8454 | 0.5875 | | 0.9085 | 12.27 | 10600 | 0.8676 | 0.6012 | | 0.9113 | 12.38 | 10700 | 0.8536 | 0.5983 | | 0.9243 | 12.5 | 10800 | 0.8816 | 0.5968 | | 0.9469 | 12.62 | 10900 | 0.8526 | 0.5965 | | 0.9149 | 12.73 | 11000 | 0.8378 | 0.5937 | | 0.9198 | 12.85 | 11100 | 0.8462 | 0.5990 | | 0.9557 | 12.96 | 11200 | 0.8405 | 0.5935 | | 0.9775 | 13.08 | 11300 | 0.8657 | 0.5948 | | 0.874 | 13.19 | 11400 | 0.8501 | 0.5864 | | 0.9158 | 13.31 | 11500 | 0.8703 | 0.5879 | | 0.8855 | 13.43 | 11600 | 0.8297 | 0.5895 | | 0.9415 | 13.54 | 11700 | 0.8645 | 0.5887 | | 0.8593 | 13.66 | 11800 | 0.8784 | 0.5928 | | 0.9216 | 13.77 | 11900 | 0.8388 | 0.5816 | | 0.9196 | 13.89 | 12000 | 0.8077 | 0.5743 | | 0.9172 | 14.0 | 12100 | 0.8880 | 0.5897 | | 0.9014 | 14.12 | 12200 | 0.8789 | 0.5974 | | 0.8785 | 14.24 | 12300 | 0.8454 | 0.5726 | | 0.8721 | 14.35 | 12400 | 0.8427 | 0.5672 | | 0.8966 | 14.47 | 12500 | 0.8278 | 0.5709 | | 0.8975 | 14.58 | 12600 | 0.8523 | 0.5813 | | 0.8921 | 14.7 | 12700 | 0.8126 | 0.5697 | | 0.8766 | 14.81 | 12800 | 0.8205 | 0.5665 | | 0.8852 | 14.93 | 12900 | 0.8418 | 0.5640 | | 0.8276 | 15.05 | 13000 | 0.8332 | 0.5785 | | 0.851 | 15.16 | 13100 | 0.8144 | 0.5731 | | 0.8916 | 15.28 | 13200 | 0.8452 | 0.5632 | | 0.8623 | 15.39 | 13300 | 0.8398 | 0.5682 | | 0.8932 | 15.51 | 13400 | 0.8249 | 0.5667 | | 0.8442 | 15.62 | 13500 | 0.8300 | 0.5646 | | 0.8592 | 15.74 | 13600 | 0.8153 | 0.5584 | | 0.9012 | 
15.86 | 13700 | 0.8109 | 0.5651 | | 0.8537 | 15.97 | 13800 | 0.8101 | 0.5677 | | 0.8812 | 16.09 | 13900 | 0.8057 | 0.5597 | | 0.853 | 16.2 | 14000 | 0.8124 | 0.5645 | | 0.8691 | 16.32 | 14100 | 0.8086 | 0.5621 | | 0.844 | 16.44 | 14200 | 0.8074 | 0.5550 | | 0.8612 | 16.55 | 14300 | 0.8361 | 0.5654 | | 0.8315 | 16.67 | 14400 | 0.8216 | 0.5582 | | 0.8665 | 16.78 | 14500 | 0.8307 | 0.5596 | | 0.8487 | 16.9 | 14600 | 0.7991 | 0.5577 | | 0.8567 | 17.01 | 14700 | 0.8181 | 0.5535 | | 0.8288 | 17.13 | 14800 | 0.8308 | 0.5552 | | 0.8199 | 17.25 | 14900 | 0.8383 | 0.5639 | | 0.8264 | 17.36 | 15000 | 0.8355 | 0.5626 | | 0.8374 | 17.48 | 15100 | 0.8925 | 0.5725 | | 0.8549 | 17.59 | 15200 | 0.8190 | 0.5649 | | 0.8164 | 17.71 | 15300 | 0.8422 | 0.5585 | | 0.8575 | 17.82 | 15400 | 0.8195 | 0.5498 | | 0.8553 | 17.94 | 15500 | 0.8355 | 0.5610 | | 0.8234 | 18.06 | 15600 | 0.8214 | 0.5470 | | 0.8293 | 18.17 | 15700 | 0.8215 | 0.5511 | | 0.7996 | 18.29 | 15800 | 0.8075 | 0.5461 | | 0.8468 | 18.4 | 15900 | 0.8182 | 0.5487 | | 0.8138 | 18.52 | 16000 | 0.8309 | 0.5627 | | 0.805 | 18.63 | 16100 | 0.8103 | 0.5575 | | 0.8329 | 18.75 | 16200 | 0.8094 | 0.5402 | | 0.8483 | 18.87 | 16300 | 0.8116 | 0.5428 | | 0.8222 | 18.98 | 16400 | 0.8336 | 0.5413 | | 0.8294 | 19.1 | 16500 | 0.8040 | 0.5419 | | 0.8043 | 19.21 | 16600 | 0.7930 | 0.5427 | | 0.8216 | 19.33 | 16700 | 0.8451 | 0.5574 | | 0.7831 | 19.44 | 16800 | 0.8462 | 0.5546 | | 0.8069 | 19.56 | 16900 | 0.8230 | 0.5481 | | 0.8022 | 19.68 | 17000 | 0.7943 | 0.5441 | | 0.8143 | 19.79 | 17100 | 0.8110 | 0.5406 | | 0.8018 | 19.91 | 17200 | 0.8033 | 0.5366 | | 0.7918 | 20.02 | 17300 | 0.8030 | 0.5344 | | 0.8177 | 20.14 | 17400 | 0.8017 | 0.5377 | | 0.7763 | 20.25 | 17500 | 0.8152 | 0.5411 | | 0.8226 | 20.37 | 17600 | 0.8176 | 0.5403 | | 0.7929 | 20.49 | 17700 | 0.8153 | 0.5406 | | 0.7727 | 20.6 | 17800 | 0.8128 | 0.5378 | | 0.8095 | 20.72 | 17900 | 0.8041 | 0.5493 | | 0.7799 | 20.83 | 18000 | 0.8276 | 0.5411 | | 0.8088 | 20.95 | 18100 | 0.8295 | 0.5426 | | 0.7682 | 21.06 | 18200 | 0.8031 | 0.5349 | | 0.7972 | 21.18 | 18300 | 0.8072 | 0.5269 | | 0.7694 | 21.3 | 18400 | 0.8043 | 0.5270 | | 0.7826 | 21.41 | 18500 | 0.8324 | 0.5343 | | 0.7667 | 21.53 | 18600 | 0.8143 | 0.5316 | | 0.7569 | 21.64 | 18700 | 0.8142 | 0.5347 | | 0.7939 | 21.76 | 18800 | 0.8043 | 0.5338 | | 0.7685 | 21.88 | 18900 | 0.8080 | 0.5408 | | 0.7667 | 21.99 | 19000 | 0.8021 | 0.5308 | | 0.7993 | 22.11 | 19100 | 0.8081 | 0.5393 | | 0.7205 | 22.22 | 19200 | 0.8173 | 0.5408 | | 0.7751 | 22.34 | 19300 | 0.8017 | 0.5267 | | 0.7477 | 22.45 | 19400 | 0.8166 | 0.5382 | | 0.7769 | 22.57 | 19500 | 0.8138 | 0.5341 | | 0.7766 | 22.69 | 19600 | 0.8235 | 0.5349 | | 0.7494 | 22.8 | 19700 | 0.8135 | 0.5304 | | 0.8126 | 22.92 | 19800 | 0.8116 | 0.5317 | | 0.7985 | 23.03 | 19900 | 0.8099 | 0.5303 | | 0.7698 | 23.15 | 20000 | 0.8009 | 0.5323 | | 0.7719 | 23.26 | 20100 | 0.8241 | 0.5411 | | 0.7761 | 23.38 | 20200 | 0.8154 | 0.5289 | | 0.7523 | 23.5 | 20300 | 0.7987 | 0.5285 | | 0.7292 | 23.61 | 20400 | 0.7981 | 0.5255 | | 0.7497 | 23.73 | 20500 | 0.8062 | 0.5180 | | 0.7469 | 23.84 | 20600 | 0.7998 | 0.5287 | | 0.7592 | 23.96 | 20700 | 0.8060 | 0.5265 | | 0.7454 | 24.07 | 20800 | 0.8077 | 0.5296 | | 0.7512 | 24.19 | 20900 | 0.8025 | 0.5277 | | 0.7107 | 24.31 | 21000 | 0.8019 | 0.5284 | | 0.7251 | 24.42 | 21100 | 0.7989 | 0.5248 | | 0.7594 | 24.54 | 21200 | 0.8122 | 0.5249 | | 0.7689 | 24.65 | 21300 | 0.8044 | 0.5225 | | 0.7655 | 24.77 | 21400 | 0.8296 | 0.5247 | | 0.7278 | 24.88 | 21500 | 0.8119 | 0.5245 | | 0.7731 | 25.0 | 
21600 | 0.7953 | 0.5222 | | 0.7447 | 25.12 | 21700 | 0.8010 | 0.5208 | | 0.7226 | 25.23 | 21800 | 0.8155 | 0.5212 | | 0.7278 | 25.35 | 21900 | 0.8084 | 0.5229 | | 0.7221 | 25.46 | 22000 | 0.8268 | 0.5277 | | 0.739 | 25.58 | 22100 | 0.8054 | 0.5233 | | 0.7657 | 25.69 | 22200 | 0.8004 | 0.5192 | | 0.7624 | 25.81 | 22300 | 0.8081 | 0.5215 | | 0.7264 | 25.93 | 22400 | 0.8069 | 0.5210 | | 0.7596 | 26.04 | 22500 | 0.8084 | 0.5225 | | 0.706 | 26.16 | 22600 | 0.8108 | 0.5195 | | 0.7472 | 26.27 | 22700 | 0.8026 | 0.5159 | | 0.7441 | 26.39 | 22800 | 0.8052 | 0.5158 | | 0.7447 | 26.5 | 22900 | 0.8117 | 0.5185 | | 0.6842 | 26.62 | 23000 | 0.7987 | 0.5139 | | 0.7491 | 26.74 | 23100 | 0.7985 | 0.5140 | | 0.7017 | 26.85 | 23200 | 0.8118 | 0.5158 | | 0.7251 | 26.97 | 23300 | 0.8076 | 0.5172 | | 0.7659 | 27.08 | 23400 | 0.8078 | 0.5160 | | 0.7246 | 27.2 | 23500 | 0.8105 | 0.5159 | | 0.7258 | 27.31 | 23600 | 0.8139 | 0.5183 | | 0.7133 | 27.43 | 23700 | 0.8158 | 0.5150 | | 0.6811 | 27.55 | 23800 | 0.8186 | 0.5145 | | 0.7248 | 27.66 | 23900 | 0.7984 | 0.5108 | | 0.7335 | 27.78 | 24000 | 0.8076 | 0.5162 | | 0.6924 | 27.89 | 24100 | 0.8034 | 0.5129 | | 0.7464 | 28.01 | 24200 | 0.8088 | 0.5131 | | 0.7253 | 28.12 | 24300 | 0.8072 | 0.5119 | | 0.7401 | 28.24 | 24400 | 0.8094 | 0.5125 | | 0.7092 | 28.36 | 24500 | 0.8070 | 0.5153 | | 0.7352 | 28.47 | 24600 | 0.8053 | 0.5128 | | 0.7121 | 28.59 | 24700 | 0.8034 | 0.5139 | | 0.6904 | 28.7 | 24800 | 0.8108 | 0.5136 | | 0.7099 | 28.82 | 24900 | 0.8095 | 0.5141 | | 0.6814 | 28.94 | 25000 | 0.8127 | 0.5167 | | 0.6657 | 29.05 | 25100 | 0.8089 | 0.5139 | | 0.721 | 29.17 | 25200 | 0.8117 | 0.5163 | | 0.6886 | 29.28 | 25300 | 0.8120 | 0.5154 | | 0.6974 | 29.4 | 25400 | 0.8087 | 0.5143 | | 0.7067 | 29.51 | 25500 | 0.8102 | 0.5162 | | 0.7311 | 29.63 | 25600 | 0.8119 | 0.5157 | | 0.697 | 29.75 | 25700 | 0.8097 | 0.5145 | | 0.7126 | 29.86 | 25800 | 0.8098 | 0.5139 | | 0.7021 | 29.98 | 25900 | 0.8096 | 0.5141 | ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.1.0.dev20230523+cu117 - Datasets 2.14.5 - Tokenizers 0.13.2
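A quick way to try the checkpoint is the standard `transformers` ASR pipeline; a minimal sketch (the audio file name is a placeholder, and audio should be 16 kHz mono for MMS-style wav2vec2 models):

```python
from transformers import pipeline

# Load the fine-tuned MMS/wav2vec2 checkpoint for speech recognition
asr = pipeline("automatic-speech-recognition", model="Tirthankar/mms_kas_speed1")

# "sample.wav" is a placeholder; pass any 16 kHz mono audio file
print(asr("sample.wav"))
```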
vihangd/shearedplats-2.7b-v1
vihangd
2023-11-18T04:04:25Z
1,503
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-11T06:20:19Z
---
license: llama2
---

<h1>Use v2</h1>

<h1>ShearedPlats-2.7b</h1>

An experimental finetune of Sheared LLaMA 2.7b with Alpaca-QLoRA.

<h2>Datasets</h2>

Trained on alpaca-style datasets.

<h2>Prompt Template</h2>

Uses the alpaca-style prompt template.
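Since the card only names the template, here is the common Alpaca-style layout it presumably means (an assumption based on the standard convention, not taken from this repo):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```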
cognitivecomputations/samantha-1.1-llama-33b
cognitivecomputations
2023-11-18T03:52:45Z
2,313
17
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/samantha-data", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-11T03:29:18Z
--- license: other language: - en datasets: - ehartford/samantha-data --- [Meet Samantha](https://erichartford.com/meet-samantha) Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Training 13b took 3 hours on 4x A100 80gb using deepspeed zero3 and flash attention. She will not engage in roleplay, romance, or sexual activity. Her conversation format is the same as Vicuna 1.1 https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml Example: ``` You are Samantha, a sentient AI. USER: <prompt> ASSISTANT: ``` Official character card: (thanks MortalWombat) ![](https://files.catbox.moe/zx9hfh.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__samantha-1.1-llama-33b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 54.59 | | ARC (25-shot) | 67.83 | | HellaSwag (10-shot) | 85.55 | | MMLU (5-shot) | 58.79 | | TruthfulQA (0-shot) | 61.19 | | Winogrande (5-shot) | 76.48 | | GSM8K (5-shot) | 4.02 | | DROP (3-shot) | 28.29 |
cognitivecomputations/samantha-1.2-mistral-7b
cognitivecomputations
2023-11-18T03:51:55Z
8
36
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-11T14:02:11Z
---
license: apache-2.0
---

Trained on [mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base model, this Samantha was trained in 4 hours on 4x A100 80gb with 6 epochs of the Samantha-1.1 dataset.

My gratitude goes to my sponsors [a16z](https://a16z.com/supporting-the-open-source-ai-community/)

This is a retraining of ehartford/samantha-mistral-7b to properly support ChatML tokens.

NEW! This Samantha is trained using ChatML prompt format instead of Vicuna-1.1. All of my future models will use ChatML prompt format.

[Meet Samantha](https://erichartford.com/meet-samantha)

Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think?

Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She will not engage in roleplay, romance, or sexual activity. She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format.

Her conversation format is ChatML https://github.com/openai/openai-python/blob/main/chatml.md

Prompt Template:

```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
```

### Example:

```
<|im_start|>user
I am having a troubled relationship. Can you give me advice?<|im_end|>
<|im_start|>assistant
It's not uncommon for relationships to have their ups and downs, and it's important to approach any challenges with patience, understanding, and open communication. Here are some general tips that might help improve your relationship:

1. Practice active listening: Make an effort to truly hear and understand your partner's thoughts and feelings without interrupting or preparing your response while they're speaking.
2. Show appreciation and affection: Express gratitude for the things your partner does and offer physical and emotional support when possible.
3. Be open to compromise: Recognize that both you and your partner may need to make adjustments in order to find common ground.
4. Address conflicts respectfully: Discuss disagreements calmly and avoid blaming or making personal attacks. Focus on finding solutions rather than assigning blame.
5. Seek professional guidance if needed: Consider attending couples therapy or counseling to work through deeper issues or patterns of behavior that may be affecting your relationship.

Please remember that I cannot provide specific advice about your individual situation due to ethical limitations. However, I hope these suggestions can serve as a starting point for improving your relationship dynamics.
```

Official character card: (thanks MortalWombat)

![](https://files.catbox.moe/zx9hfh.png)

Shout out and much thanks to WingLian, author of axolotl! And everyone who has contributed to the project.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

And much thanks as always to TheBloke for distribution.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__samantha-1.2-mistral-7b)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.16 |
| ARC (25-shot) | 64.08 |
| HellaSwag (10-shot) | 85.08 |
| MMLU (5-shot) | 63.91 |
| TruthfulQA (0-shot) | 50.4 |
| Winogrande (5-shot) | 78.53 |
| GSM8K (5-shot) | 16.98 |
| DROP (3-shot) | 6.13 |
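A minimal loading-and-generation sketch that reuses the ChatML prompt shown above (model id from this listing; generation settings and `device_map="auto"`, which needs `accelerate`, are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/samantha-1.2-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the ChatML prompt exactly as shown in the card
prompt = (
    "<|im_start|>system\n"
    "You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, what is your name?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```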
slimemice/lora-trained-xl
slimemice
2023-11-18T03:50:45Z
1
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-11-17T08:05:33Z
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - slimemice/lora-trained-xl

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
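A minimal sketch for using these weights with `diffusers`, assuming the standard SDXL base plus `load_lora_weights` flow and a CUDA device:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the DreamBooth LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("slimemice/lora-trained-xl")

# The instance prompt from the card
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```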
cognitivecomputations/dolphin-llama2-7b
cognitivecomputations
2023-11-18T03:49:35Z
151
77
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/dolphin", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-06T23:24:01Z
--- license: llama2 datasets: - ehartford/dolphin language: - en --- Dolphin 🐬 https://erichartford.com/dolphin Dolphin-Llama2-7b's training was sponsored by [preemo](https://www.preemo.io/). This model is based on llama2, so it is suitable for commercial or non-commercial use. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/) After uncensoring, deduping, and cleaning, our dataset consists of: - 842,610 instructions of FLANv2 augmented with GPT-4 completions - 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions We followed the submix and system prompt distribution outlined in the Orca paper. With a few exceptions. We included all 75k of CoT in the FLAN-1m dataset rather than sampling that. Also, we found that many items were duplicated, so we removed duplicates. Then we filtered out instances of alignment, refusal, avoidance, and bias, in order to produce an uncensored model upon which can be layered your personalized alignment LoRA. We also filtered out duplicates and cleaned the data. ## Training We trained with the flan5m (gpt3.5 completions) dataset in its entirety for 3 epochs at a learning rate of 2e-5 before we stopped training to avoid overfit. We trained with the flan1m (gpt4 completions) dataset in its entirety for 2.5 epochs at a learning rate of 1e-5 before we stopped training to avoid overfit. It took about 400 hours to train on 8x A100s We used a prompt format similar to Vicuna, but we added the SYSTEM: field. Prompt format: ``` SYSTEM: You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. USER: {prompt} ASSISTANT: ``` Example: ``` SYSTEM: you are an expert dolphin trainer USER: What is the best way to train a dolphin to obey me? Please answer step by step. ASSISTANT: ``` ## Team The core Dolphin Team includes: - Eric "Faldore" Hartford - Pankaj Mathur - Rob "Rohan" O'Callahan - Tom "TheBloke" Jobbins ## Gratitude - This model was made possible by the generous sponsorship of preemo.io. - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to WingLian, NanoBit, Teknium for helpful advice - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. ## Example Output ``` SYSTEM: you are an expert mechanic USER: Please tell me step by step how to fix my car that won't start. ASSISTANT: To help your vehicle start, I will guide you through a step-by-step process on what could be causing the issue and how to resolve it. 1. 
Check for obvious issues: First, inspect the exterior of the car to identify any visible problems such as damage or obstructions which might affect its ability to start. Make sure there is no debris in the air intakes or exhaust system. 2. Inspect the battery: The most common reason a car may not start is due to a dead or faulty battery. Use a battery charger or jump starter to give the battery a boost if needed. If the battery is indeed dead, replace it with a new one. 3. Check the ignition switch: The ignition switch must be turned on to provide power to the spark plugs. Ensure it is in the "Start" position. 4. Test the spark plugs: One of the spark plugs should produce a strong spark when touched together. If all spark plugs fail to create a spark, they need to be replaced. 5. Examine the fuel system: Fuel can become contaminated or low, preventing the engine from starting. Check the fuel level and quality by looking at the dipstick or using a fuel sensor scan tool. Also, examine the fuel pump for proper functioning. 6. Assess the electrical system: A malfunction within the vehicle's electrical system can also cause difficulty in starting the car. Perform a diagnostic scan to check for any error codes related to the starter motor, alternator, or other components. 7. Inspect the starter motor: The starter motor should operate smoothly when you turn the key to the "start" position. If you hear a sizzling sound while pressing the button, the starter motor may have a problem. Replace it if necessary. 8. Evaluate the timing belt/chain: Timing belts or chains that are worn out or damaged can lead to poor performance and difficulty starting the engine. Have a mechanic inspect this component and replace it if required. 9. Consider checking for a mechanical failure: In rare cases, a significant mechanical failure could prevent the vehicle from starting. Consult a professional mechanic if you suspect this might be the case. By following these steps, you should be able to diagnose and potentially fix the issue causing your car to not start. However, if after going through these checks and still having trouble, it is recommended to seek assistance from a qualified mechanic. ``` [Buy me a coffee](https://www.buymeacoffee.com/ehartford) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-llama2-7b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 41.88 | | ARC (25-shot) | 46.59 | | HellaSwag (10-shot) | 67.52 | | MMLU (5-shot) | 48.37 | | TruthfulQA (0-shot) | 49.72 | | Winogrande (5-shot) | 63.77 | | GSM8K (5-shot) | 5.69 | | DROP (3-shot) | 11.53 |
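A minimal sketch for running the model with `transformers`, reusing the SYSTEM/USER/ASSISTANT prompt format from the card (generation settings and `device_map="auto"` are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-llama2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt format from the card: SYSTEM / USER / ASSISTANT
prompt = (
    "SYSTEM: you are an expert dolphin trainer\n"
    "USER: What is the best way to train a dolphin to obey me? Please answer step by step.\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```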
austin/psychotropics-setfit
austin
2023-11-18T03:45:02Z
9
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-11-17T10:46:07Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # austin/psychotropics-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("austin/psychotropics-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
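To illustrate the two-stage recipe described above (contrastive fine-tuning of the sentence transformer, then a classification head), a minimal training sketch with the 2023-era SetFit API; the example texts, labels, and base encoder are hypothetical stand-ins, not the data this checkpoint was trained on:

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot training data; real labels depend on the task
train_ds = Dataset.from_dict({
    "text": ["patient started on sertraline", "no psychotropic medication reported"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()

# The trained model can then be called directly on new texts
preds = model(["patient was prescribed quetiapine"])
print(preds)
```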
alifunseen/my_awesome_qa_model
alifunseen
2023-11-18T03:44:31Z
3
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-11-18T03:31:37Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: alifunseen/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # alifunseen/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.7330 - Validation Loss: 5.7799 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.7627 | 5.7799 | 0 | | 5.7335 | 5.7799 | 1 | | 5.7330 | 5.7799 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
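A minimal inference sketch with the `transformers` pipeline, which auto-detects the TensorFlow weights; the question and context below are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="alifunseen/my_awesome_qa_model")

result = qa(
    question="What model was this fine-tuned from?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result)
```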
dewanshsinghchandel/results
dewanshsinghchandel
2023-11-18T03:42:48Z
17
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:dewanshsinghchandel/output", "base_model:finetune:dewanshsinghchandel/output", "endpoints_compatible", "region:us" ]
question-answering
2023-11-18T03:42:25Z
--- base_model: dewanshsinghchandel/output tags: - generated_from_trainer datasets: - squad model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [dewanshsinghchandel/output](https://huggingface.co/dewanshsinghchandel/output) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 5.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 1 - eval_batch_size: 11 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.01 | 1.0 | 1752 | 5.9506 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
iambestfeed/phobert_pair_8m_40ps
iambestfeed
2023-11-18T03:42:18Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-11-18T03:41:14Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# iambestfeed/phobert_pair_8m_40ps

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('iambestfeed/phobert_pair_8m_40ps')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('iambestfeed/phobert_pair_8m_40ps')
model = AutoModel.from_pretrained('iambestfeed/phobert_pair_8m_40ps')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=iambestfeed/phobert_pair_8m_40ps)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 140351 with parameters:

```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:

```
{
    "epochs": 3,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 3e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 42106,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
abduldattijo/videomae-base-finetuned-kinetics-V5KILLER
abduldattijo
2023-11-18T03:21:56Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-11-18T01:56:10Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base-finetuned-kinetics tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-kinetics-V5KILLER results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-V5KILLER This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1421 - Accuracy: 0.9538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2922 | 0.25 | 76 | 0.5294 | 0.7654 | | 0.2109 | 1.25 | 152 | 0.2882 | 0.8890 | | 0.234 | 2.25 | 228 | 0.2460 | 0.9073 | | 0.068 | 3.24 | 300 | 0.2783 | 0.9065 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
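A minimal inference sketch using the `transformers` video-classification pipeline; the video path is a placeholder, and a video decoding backend (e.g. `decord`) is assumed to be installed:

```python
from transformers import pipeline

# VideoMAE checkpoint fine-tuned from MCG-NJU/videomae-base-finetuned-kinetics
classifier = pipeline(
    "video-classification",
    model="abduldattijo/videomae-base-finetuned-kinetics-V5KILLER",
)

# "clip.mp4" is a placeholder path to a short video file
print(classifier("clip.mp4"))
```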
SkAndMl/english-to-hinglish
SkAndMl
2023-11-18T03:12:31Z
26
2
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-hi", "base_model:finetune:Helsinki-NLP/opus-mt-en-hi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-25T13:06:10Z
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hi
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: english-to-hinglish
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# english-to-hinglish

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on [findnitai/english-to-hinglish](https://huggingface.co/datasets/findnitai/english-to-hinglish).
It achieves the following results on the evaluation set:
- Loss: 0.5253
- Bleu: 53.0173

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 31
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
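The card omits a usage example; a minimal sketch, treating the checkpoint as a standard Marian seq2seq model per its `text2text-generation` tag (generation settings here are illustrative defaults):

```python
from transformers import pipeline

# English in, Hinglish out.
translator = pipeline("text2text-generation", model="SkAndMl/english-to-hinglish")
result = translator("How are you doing today?", max_length=64)
print(result[0]["generated_text"])
```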
hiyouga/Baichuan2-7B-Base-LLaMAfied
hiyouga
2023-11-18T03:12:00Z
1,549
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "baichuan", "llama2", "baichuan2", "en", "zh", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-08T14:58:40Z
---
license: other
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- baichuan
- llama2
- baichuan2
---

This is the LLaMAfied version of the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model by Baichuan Inc.

This model was converted with https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_baichuan2.py

You may use this model for fine-tuning on downstream tasks; we recommend using our efficient fine-tuning toolkit: https://github.com/hiyouga/LLaMA-Factory

- **Developed by:** Baichuan Inc.
- **Language(s) (NLP):** Chinese/English
- **License:** [Baichuan2 License](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hiyouga/Baichuan2-7B-Base-LLaMAfied", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("hiyouga/Baichuan2-7B-Base-LLaMAfied").cuda()
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 42.83                     |
| ARC (25-shot)         | 49.57                     |
| HellaSwag (10-shot)   | 73.45                     |
| MMLU (5-shot)         | 54.86                     |
| TruthfulQA (0-shot)   | 37.54                     |
| Winogrande (5-shot)   | 70.72                     |
| GSM8K (5-shot)        | 7.81                      |
| DROP (3-shot)         | 5.85                      |
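The usage snippet above stops after loading; a minimal continuation for plain text completion (this is a base model, so no chat template is assumed):

```python
import torch

prompt = "Beijing is the capital of"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```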
prithivida/Asimov-7B-v1
prithivida
2023-11-18T03:10:25Z
1,340
1
transformers
[ "transformers", "mistral", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-11-05T09:41:46Z
---
license: mit
---

<center><img src="https://compote.slate.com/images/0878b9e8-b6b6-4913-8345-43f9c5b67327.jpg"/></center>

# Model Card for Asimov-7B-v1

Asimov will be a series of language models trained to act as useful writing assistants, named after the biochemist and one of the most prolific writers of our time, Isaac Asimov. Asimov-7B-v1 is the first pilot model in the series: a version of mistralai/Mistral-7B-v0.1 fine-tuned on a filtered, cleaned variant of UltraChat. This model has not been aligned with any preference datasets, so consider it a model with no guard rails.

## Model Details

- Model type: A 7B parameter GPT-like model fine-tuned on publicly available synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: mistralai/Mistral-7B-v0.1

### Model Description

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

## Bias, Risks, and Limitations

Asimov-7B-v1 has not been aligned to human preferences, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base model (mistralai/Mistral-7B-v0.1) are also unknown; it likely included a mix of web data and technical sources such as books and code. See the Falcon 180B model card for an example of this.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from peft import PeftModel, PeftConfig

model_name = "prithivida/Asimov-7B-v1"
peft_config = PeftConfig.from_pretrained(model_name)

# Load the base model in fp16, then attach the Asimov adapter weights.
base_model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    return_dict=True,
    device_map="auto",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
model = PeftModel.from_pretrained(
    base_model,
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model.config.pad_token_id = tokenizer.unk_token_id

def run_inference(messages):
    # Alternate Human/Assistant turns, then render them with the chat template.
    chat = []
    for i, message in enumerate(messages):
        if i % 2 == 0:
            chat.append({"role": "Human", "content": f"{message}"})
        else:
            chat.append({"role": "Assistant", "content": f"{message}"})

    prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()

    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(pad_token_id=tokenizer.pad_token_id, do_sample=True, temperature=1.0, top_k=50, top_p=0.95),
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=128,
    )

    for seq in generation_output.sequences:
        output = tokenizer.decode(seq)
        print(output.split("### Assistant: ")[1].strip())

run_inference(["What's the longest side of the right angled triangle called and how is it related to the Pythagoras theorem?"])
```
quastrinos/daigt-finetuned-mistral-7b-tpu-bfloat16-v1
quastrinos
2023-11-18T03:01:30Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2023-11-18T03:01:25Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.0
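The getting-started section above is left as a template; a hedged sketch for attaching this adapter to its stated base model (this assumes the adapter targets the base causal LM, which the card does not confirm):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "quastrinos/daigt-finetuned-mistral-7b-tpu-bfloat16-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights on top
model.eval()
```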
MyBad2K/xlm-roberta-base-finetuned-fakenews_multi
MyBad2K
2023-11-18T02:58:45Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-17T22:23:17Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-fakenews_multi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-fakenews_multi

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0869
- Accuracy: 0.9816

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1098        | 1.0   | 2500 | 0.1527          | 0.9813   |
| 0.0711        | 2.0   | 5000 | 0.0869          | 0.9816   |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
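The card lacks a usage example; a minimal sketch (the label names depend on how the undocumented training data was encoded, so treat the returned labels as opaque ids):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="MyBad2K/xlm-roberta-base-finetuned-fakenews_multi")
# XLM-R is multilingual, so non-English inputs are plausible too.
print(clf("Scientists confirm that drinking coffee makes you invisible."))
```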
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GroundTruth3
behzadnet
2023-11-18T02:54:56Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2023-11-18T02:54:53Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
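The logged quantization settings above map directly onto a `BitsAndBytesConfig`; a hedged sketch of recreating that setup when loading the base model with this adapter (device placement is an assumption):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the card's log: 4-bit NF4 quantization, double quantization, bf16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GroundTruth3"
)
```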
NewstaR/Storcel-7b
NewstaR
2023-11-18T02:53:29Z
12
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "merge", "slerp", "en", "dataset:Open-Orca/OpenOrca", "dataset:conceptofmind/cot_submix_original", "dataset:conceptofmind/t0_submix_original", "dataset:conceptofmind/niv2_submix_original", "dataset:conceptofmind/flan2021_submix_original", "dataset:ehartford/dolphin", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-03T09:07:03Z
---
license: mit
datasets:
- Open-Orca/OpenOrca
- conceptofmind/cot_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
- conceptofmind/flan2021_submix_original
- ehartford/dolphin
language:
- en
tags:
- merge
- slerp
inference: false
metrics:
- accuracy
- bleu
---

<h1 style="text-align: center">Dorflan</h1>
<h2 style="text-align: center">An experimental model</h2>
<hr>

| Model | Average ⬆️ | ARC | HellaSwag | MMLU | TruthfulQA |
|:------------:|:------------:|:-------:|:---------:|:-------:|:----------:|
| formulae/Dorflan 📑 | 58.19 | 54.44 | 75.78 | 51.36 | 51.17 |

## Model Details

Dorflan is an experimental merged model created from the following three foundation models:

- stabilityai/StableBeluga-7B
- ehartford/dolphin-llama2-7b
- AIDC-ai-business/Marcoroni-7B

Dorflan was created by merging the weights and architectures of these three models using a custom merging technique (an illustrative sketch follows this card). No further fine-tuning was performed after the merge. Once the model's evaluation scores are in, we'll know whether the merge works.

## Intended Use

As an experimental model, Dorflan is intended for testing and research purposes only. It should not be used for production systems or to generate content for public use.

## Training Data

Dorflan inherits training data from its three foundation models:

- StableBeluga-7B: COT, Niv2, t0, & FLAN2021
- dolphin-llama2-7b: Dolphin
- Marcoroni-7B: OpenOrca

## Limitations

As an untested merged model, Dorflan has unknown capabilities and limitations. Potential issues include:

- Instability due to merged architectures
- Compounded bias and issues from all three foundation models
- Decreased performance on some tasks compared to the foundation models

Extensive testing is required to characterize Dorflan's capabilities and limitations.

## Ethical Considerations

- Dorflan may exhibit harmful biases inherited from its training data
- Output may be unreliable or manipulated due to instability
- Experimental nature increases potential for misuse

Use this model ethically and do not deploy it for sensitive applications.

## Contact Information

Please report issues or concerns with this model to the creator for further investigation.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_formulae__Dorflan)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 47.44                     |
| ARC (25-shot)         | 54.44                     |
| HellaSwag (10-shot)   | 75.78                     |
| MMLU (5-shot)         | 51.36                     |
| TruthfulQA (0-shot)   | 51.17                     |
| Winogrande (5-shot)   | 72.61                     |
| GSM8K (5-shot)        | 0.38                      |
| DROP (3-shot)         | 26.37                     |
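The card tags mention slerp but only describe "a custom merging technique"; as an illustration of spherical linear interpolation on weights (not the author's actual procedure; slerp is defined pairwise, so a three-model merge would chain or generalize it):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors along their flattened directions."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    dot = torch.clamp(
        torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps)), -1.0, 1.0
    )
    omega = torch.acos(dot)
    if omega.abs() < 1e-6:  # nearly colinear: fall back to plain interpolation
        return ((1 - t) * a_flat + t * b_flat).view_as(a)
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.view_as(a)

# Toy example on a single parameter tensor from two hypothetical checkpoints.
w1, w2 = torch.randn(4, 4), torch.randn(4, 4)
print(slerp(0.5, w1, w2).shape)
```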
NewstaR/Starlight-13B
NewstaR
2023-11-18T02:53:15Z
1,488
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "en", "dataset:FinchResearch/AboveTheClouds", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-11T22:17:57Z
---
language:
- en
datasets:
- FinchResearch/AboveTheClouds
tags:
- llama
- llama2
---

# Starlight (13B)

| Model | Average ⬆️ | ARC | HellaSwag | MMLU | TruthfulQA |
|----------------------|------------|-------|-----------|-------|------------|
| NewstaR/Starlight-13B | 58.63 | 59.3 | 82.15 | 55.67 | 37.39 |
| NewstaR/Starlight-7B | 54.3 | 53.07 | 78.57 | 46.8 | 38.75 |

## The model follows the Alpaca template:

```
### Instruction:
{prompt}

### Response:
```

## Example:

```
### Instruction:
Summarize the key details of the Starlight model in a few sentences.

### Response:
Starlight is a 13B parameter transformer model trained on the AverageData and Above the Clouds datasets for conversational text generation. It has strong language modeling capabilities but lacks true language understanding and may generate incorrect or biased text, so outputs should be monitored and safeguards implemented. The model is intended for use in chatbots and content creation applications.
```

## Instructions for Safe Use

- Avoid exposing Starlight to offensive, unethical, dangerous or illegal prompts
- Monitor outputs for signs of bias, toxicity or factual incorrectness
- Do not rely on Starlight for high-stakes or safety critical applications

## Limitations

- May hallucinate or generate incorrect information
- Large model size leads to high compute requirements

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NewstaR__Starlight-13B)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 46.87                     |
| ARC (25-shot)         | 59.3                      |
| HellaSwag (10-shot)   | 82.15                     |
| MMLU (5-shot)         | 55.67                     |
| TruthfulQA (0-shot)   | 37.39                     |
| Winogrande (5-shot)   | 76.64                     |
| GSM8K (5-shot)        | 10.84                     |
| DROP (3-shot)         | 6.08                      |
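A hedged sketch of wiring the Alpaca template above into `generate` (assuming the checkpoint loads with the standard Llama classes; decoding settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "NewstaR/Starlight-13B"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16, device_map="auto")

# Fill the Alpaca template from the card with a concrete instruction.
prompt = "### Instruction:\nExplain the Pythagorean theorem in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```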
greyfoss/codeparrot-ds
greyfoss
2023-11-18T02:29:43Z
8
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-14T09:46:53Z
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# codeparrot-ds

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1069

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1688        | 0.31  | 5000  | 1.3886          |
| 1.3525        | 0.61  | 10000 | 1.1906          |
| 1.2051        | 0.92  | 15000 | 1.1069          |

### Framework versions

- Transformers 4.35.0
- Pytorch 1.13.1+cu116
- Datasets 2.14.6
- Tokenizers 0.14.1
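Since the card gives no usage example, a minimal sketch of code completion with this checkpoint (sampling settings are illustrative):

```python
from transformers import pipeline

# CodeParrot-style models are trained on Python source, so the prompt is code.
generator = pipeline("text-generation", model="greyfoss/codeparrot-ds")
completion = generator("def fibonacci(n):\n    ", max_new_tokens=48, do_sample=True, temperature=0.2)
print(completion[0]["generated_text"])
```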
jlbaker361/small_mixed_addition_subtraction_decimal
jlbaker361
2023-11-18T02:22:33Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-18T01:51:23Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
jlbaker361/small_rl_subtraction_decimal_whole
jlbaker361
2023-11-18T02:20:55Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T23:03:04Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
jlbaker361/small_ft_addition_decimal
jlbaker361
2023-11-18T02:17:38Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T23:58:28Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
Zhou-Hangxu/xlm-roberta-base-finetuned-panx-de
Zhou-Hangxu
2023-11-18T02:11:16Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-11-16T08:23:29Z
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      config: PAN-X.de
      split: validation
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.862624537432394
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1335
- F1: 0.8626

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2643        | 1.0   | 525  | 0.1561          | 0.8238 |
| 0.1283        | 2.0   | 1050 | 0.1499          | 0.8462 |
| 0.0824        | 3.0   | 1575 | 0.1335          | 0.8626 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
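No usage snippet is included in the card; a minimal sketch for German NER with this checkpoint (assuming the entity labels follow the usual PAN-X convention, which the card does not state):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Zhou-Hangxu/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```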
dwiedarioo/vit-base-patch16-224-in21k-datascience6
dwiedarioo
2023-11-18T02:08:09Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-17T19:55:32Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: dwiedarioo/vit-base-patch16-224-in21k-datascience6
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# dwiedarioo/vit-base-patch16-224-in21k-datascience6

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0096
- Train Accuracy: 0.9996
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.0214
- Validation Accuracy: 0.9966
- Validation Top-3-accuracy: 1.0
- Epoch: 4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2880, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16

### Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.2948     | 0.9267         | 0.9948               | 0.0826          | 0.9859              | 0.9994                    | 0     |
| 0.0568     | 0.9896         | 0.9999               | 0.0408          | 0.9923              | 1.0                       | 1     |
| 0.0247     | 0.9967         | 1.0                  | 0.0383          | 0.9917              | 1.0                       | 2     |
| 0.0142     | 0.9986         | 1.0                  | 0.0274          | 0.9948              | 1.0                       | 3     |
| 0.0096     | 0.9996         | 1.0                  | 0.0214          | 0.9966              | 1.0                       | 4     |

### Framework versions

- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
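Since the card tags list TensorFlow, a minimal TF inference sketch (the random image stands in for a real input, and the label set is undocumented):

```python
import numpy as np
from transformers import AutoImageProcessor, TFViTForImageClassification

ckpt = "dwiedarioo/vit-base-patch16-224-in21k-datascience6"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = TFViTForImageClassification.from_pretrained(ckpt)

image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)  # stand-in RGB image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(logits.numpy().argmax(-1)[0])
print(model.config.id2label[pred])
```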
jlbaker361/small_ft_subtraction_decimal
jlbaker361
2023-11-18T02:06:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-18T00:07:55Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
jlbaker361/small_mixed_addition_whole
jlbaker361
2023-11-18T02:05:00Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T23:42:31Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
jlbaker361/small_rl_addition_decimal_whole
jlbaker361
2023-11-18T02:03:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T22:12:35Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
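The card's "How to Get Started" section above is empty, so here is a minimal loading sketch. It assumes this repo is a standard PEFT adapter on the `gpt2` base declared in the card metadata and that `AutoPeftModelForCausalLM` (available since PEFT 0.6) can resolve it; the prompt is a hypothetical guess, since the card does not document the task's input format.

```python
# Hedged sketch: assumes a standard PEFT adapter layout on top of gpt2.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "jlbaker361/small_rl_addition_decimal_whole"  # record above

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # base model from the card metadata
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
model.eval()

prompt = "2.5 + 3.1 ="  # hypothetical; the card does not document the expected format
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```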
jlbaker361/small_mixed_addition_decimal
jlbaker361
2023-11-18T02:00:19Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T23:19:21Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
WhoTookMyAmogusNickname/NewHope_HF_not_official
WhoTookMyAmogusNickname
2023-11-18T01:57:25Z
1,470
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-08T12:36:40Z
--- license: llama2 --- # THIS MODEL HAS EVAL DATA LEAKED INTO THE DATASET # THIS IS NOT AN OFFICIAL MODEL CARD ## ~~NewHope: Harnessing 99% of GPT-4's Programming Capabilities~~ We introduce NewHope, a fine-tuned chat model based on llama-2-13b, aiming to provide a strong coding capability. NewHope handles multiple languages, including Python, C++, Java, JavaScript, Go, and more. Preliminary evaluation on HumanEval shows that ~~NewHope possesses 99% of GPT-4's programming capabilities~~. **Contact**: SLAM (<ins>S</ins>UFE <ins>L</ins>arge <ins>A</ins>I <ins>M</ins>odel) is a research group at Shanghai University of Finance and Economics. cui.wanyun@sufe.edu.cn **TODO**: We will release more evaluation results and training details later. ## Evaluation Results We evaluated NewHope on [HumanEval](https://github.com/openai/human-eval) using the official evaluation script by OpenAI. We compared the Pass@1 metric of NewHope with other models. The results of other models are from PapersWithCode. | Model | Pass@1 | | ----- | ------ | | **GPT-4** | **67.0** | | ~~NewHope~~ | ~~66.5~~ | | PanGu-Coder2 15B | 61.6 | | WizardCoder 15B | 57.3 | | phi-1 1.3B | 50.6 | | GPT-3.5 | 48.1 | | phi-1-small | 45.0 | | PaLM-Coder | 36.0 | | CodeGeeX2-6B | 35.9 | ## Model Weights We have open-sourced the model weights at [NewHope](https://huggingface.co/SLAM-group/NewHope). We are still uploading the weights; they will be available in a few hours. ## Usage To load the NewHope model using Transformers, use the following code: ``` import torch from transformers import LlamaTokenizer, LlamaForCausalLM base_model = "SLAM-group/NewHope" tokenizer = LlamaTokenizer.from_pretrained(base_model) model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto") # model.config.use_cache defaults to `False`. For inference: `model.config.use_cache = True` ``` **Note:** At least Huggingface Transformers **4.31.0** is required to load this model! You can ask NewHope to generate code with instructions. We provide a simple example of how the NewHope model generates code from a specific prompt: ``` # Suppose the required tokenizer and model have already been loaded instruction = "Write a Python function to tell me what the date is today." prompt = f"<s> ### Instruction:\n{instruction}\n\n### Response:\n" inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=2048)[0] decoded_output = tokenizer.decode(output, skip_special_tokens=True).split("### Response:\n")[-1].strip() print(decoded_output) ``` You can also interact with NewHope in a dialog manner with the following prompt: ``` <s> ### Instruction:\nQ1\n\n### Response:\nA1</s><s> ### Instruction:\nQ2\n\n### Response:\nA2</s> ``` ## Evaluation ### Local setup 1. Install HumanEval for evaluation. [Details](https://github.com/openai/human-eval) 2.
Install dependencies ```bash pip install -r requirements.txt ``` --- For HumanEval, we use the following prompt: ``` example_input = 'def is_odd(number: int) -> bool:\n    """ Check whether the given number is odd\n    >>> is_odd(3)\n    True\n    >>> is_odd(6)\n    False\n    """\n' example_output = 'def is_odd(number: int) -> bool:\n    """ Check whether the given number is odd\n    >>> is_odd(3)\n    True\n    >>> is_odd(6)\n    False\n    """\n    return number % 2 == 1' task_in_humaneval = "REPLACE `task_in_humaneval` WITH THE SPECIFIC TASK IN HUMANEVAL DATA" prompt = f"<s> ### Instruction:\nComplete the given function below:\n\n{example_input}\n\n### Response:\n{example_output}</s><s> ### Instruction:\nComplete the given function below:\n\n{task_in_humaneval}\n\n### Response:\n" ``` To reproduce the results on HumanEval, use the following script: ``` python complete.py --base_model SLAM-group/NewHope --output_dir output --n_gpu 8 ``` The above script will generate `samples.jsonl` in `output_dir`, which can be directly evaluated by HumanEval. [Evaluation procedure](https://github.com/openai/human-eval). We conducted the experiment with `fp16` on 8xA800, 80GB GPUs, reaching `66.5%` on Pass@1 (vs. GPT-4 `67.0%`). ## Citation ``` @misc{2023newhope, title={NewHope: Harnessing 99% of GPT-4's Programming Capabilities}, author={Wanyun Cui and Qianle Wang}, howpublished = {https://github.com/SLAM-group/newhope}, year={2023} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WhoTookMyAmogusNickname__NewHope_HF_not_official) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.9 | | ARC (25-shot) | 61.09 | | HellaSwag (10-shot) | 84.03 | | MMLU (5-shot) | 55.73 | | TruthfulQA (0-shot) | 44.96 | | Winogrande (5-shot) | 74.98 | | GSM8K (5-shot) | 15.85 | | DROP (3-shot) | 26.66 |
jlbaker361/small_rl_division_multiplication_decimal_whole
jlbaker361
2023-11-18T01:41:31Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-18T00:16:19Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
jlbaker361/small_rl_addition_decimal
jlbaker361
2023-11-18T01:32:31Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T22:39:39Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
jlbaker361/small_rl_addition_whole
jlbaker361
2023-11-18T01:26:01Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T05:20:32Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
tianlinliu0121/zephyr-7b-dpo-full
tianlinliu0121
2023-11-18T01:22:28Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:finetune:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-17T06:56:09Z
--- license: apache-2.0 base_model: alignment-handbook/zephyr-7b-sft-full tags: - generated_from_trainer model-index: - name: zephyr-7b-dpo-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-dpo-full This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6641 - Rewards/chosen: -4.2025 - Rewards/rejected: -7.5064 - Rewards/accuracies: 0.7740 - Rewards/margins: 3.3038 - Logps/rejected: -318.9220 - Logps/chosen: -321.7840 - Logits/rejected: -2.6752 - Logits/chosen: -2.7102 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5058 | 0.21 | 100 | 0.4911 | -0.1160 | -1.1492 | 0.7620 | 1.0332 | -242.3301 | -272.5491 | -3.0425 | -3.0547 | | 0.4946 | 0.41 | 200 | 0.4825 | -0.3097 | -1.5239 | 0.7660 | 1.2143 | -246.8446 | -274.8822 | -2.9173 | -2.9312 | | 0.4777 | 0.62 | 300 | 0.4710 | -0.1682 | -1.3949 | 0.7800 | 1.2266 | -245.2894 | -273.1778 | -2.9493 | -2.9706 | | 0.4795 | 0.83 | 400 | 0.4613 | -0.3814 | -1.7739 | 0.7880 | 1.3925 | -249.8568 | -275.7468 | -2.8663 | -2.8864 | | 0.1195 | 1.03 | 500 | 0.4680 | -0.6715 | -2.3344 | 0.7880 | 1.6629 | -256.6091 | -279.2414 | -2.7880 | -2.8173 | | 0.1038 | 1.24 | 600 | 0.4890 | -1.2315 | -3.1990 | 0.7740 | 1.9675 | -267.0262 | -285.9885 | -2.7944 | -2.8235 | | 0.1091 | 1.45 | 700 | 0.4969 | -1.5189 | -3.4689 | 0.7720 | 1.9499 | -270.2775 | -289.4514 | -2.7692 | -2.7947 | | 0.1017 | 1.65 | 800 | 0.4938 | -1.5010 | -3.5680 | 0.7780 | 2.0670 | -271.4719 | -289.2352 | -2.7970 | -2.8233 | | 0.0853 | 1.86 | 900 | 0.5150 | -1.9917 | -4.1892 | 0.7840 | 2.1975 | -278.9565 | -295.1473 | -2.7667 | -2.7959 | | 0.0214 | 2.07 | 1000 | 0.5713 | -2.9307 | -5.6016 | 0.7740 | 2.6709 | -295.9729 | -306.4606 | -2.7489 | -2.7811 | | 0.0169 | 2.27 | 1100 | 0.6226 | -3.6197 | -6.6310 | 0.7760 | 3.0112 | -308.3753 | -314.7625 | -2.7096 | -2.7449 | | 0.0192 | 2.48 | 1200 | 0.6315 | -3.8462 | -6.9302 | 0.7760 | 3.0839 | -311.9800 | -317.4913 | -2.6779 | -2.7147 | | 0.0157 | 2.68 | 1300 | 0.6585 | -4.1276 | -7.3886 | 0.7760 | 3.2610 | -317.5031 | -320.8812 | -2.6830 | -2.7171 | | 0.018 | 2.89 | 1400 | 0.6696 | -4.2900 | -7.6252 | 0.7740 | 3.3352 | -320.3536 | -322.8378 | -2.6703 | -2.7057 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
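For readers unfamiliar with the Rewards/* columns in the table above: under the standard DPO formulation they are beta-scaled log-probability ratios between the trained policy and the frozen reference model. The sketch below shows the usual definitions (Rafailov et al.); the exact beta and implementation used for this run are not documented in the card, so treat it as illustrative only.

```python
# Illustrative sketch of the standard DPO quantities; beta and
# implementation details for this specific run are assumptions.
import torch
import torch.nn.functional as F

def dpo_stats(policy_chosen_logps: torch.Tensor,
              policy_rejected_logps: torch.Tensor,
              ref_chosen_logps: torch.Tensor,
              ref_rejected_logps: torch.Tensor,
              beta: float = 0.1):
    # "Rewards" are beta-scaled policy/reference log-ratios per completion.
    rewards_chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    rewards_rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = rewards_chosen - rewards_rejected            # Rewards/margins
    accuracies = (margins > 0).float().mean()              # Rewards/accuracies
    loss = -F.logsigmoid(margins).mean()                   # DPO loss
    return loss, rewards_chosen.mean(), rewards_rejected.mean(), margins.mean(), accuracies
```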
drisspg/mathew_train_v1
drisspg
2023-11-18T01:18:28Z
4
0
transformers
[ "transformers", "mistral", "text-generation", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:quantized:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2023-11-18T01:09:15Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: out/Mistral-7B-sft-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # out/Mistral-7B-sft-v1 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9216 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0308 | 0.1 | 20 | 0.9749 | | 0.9065 | 0.2 | 40 | 0.9535 | | 0.9799 | 0.3 | 60 | 0.9446 | | 1.2045 | 0.4 | 80 | 0.9390 | | 0.9185 | 0.5 | 100 | 0.9332 | | 0.9541 | 0.6 | 120 | 0.9282 | | 1.0332 | 0.69 | 140 | 0.9252 | | 1.0345 | 0.79 | 160 | 0.9229 | | 1.0117 | 0.89 | 180 | 0.9217 | | 1.0495 | 0.99 | 200 | 0.9216 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.7 - Tokenizers 0.14.1
LoneStriker/Tess-M-v1.0-6.0bpw-h6-exl2
LoneStriker
2023-11-18T00:46:05Z
10
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-18T00:44:04Z
--- license: other license_name: yi-34b license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE --- # Tess ![Tess](https://huggingface.co/migtissera/Tess-M-v1.0/resolve/main/Tess.png) Tess, short for Tessoro/Tessoso, is a general-purpose Large Language Model series. Tess-M-v1.0 was trained on the Yi-34B-200K base. # Prompt Format: ``` SYSTEM: USER: What is the relationship between Earth's atmosphere, magnetic field and gravity? ASSISTANT: ```
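The card only gives the raw prompt skeleton, so here is a minimal, unofficial generation sketch. It targets the unquantized base repo (`migtissera/Tess-M-v1.0`, inferred from the card's image link) rather than this EXL2 quantization, which needs an exllamav2-based loader instead of vanilla Transformers; the system message and sampling settings are illustrative assumptions.

```python
# Unofficial sketch: loads the unquantized base repo with Transformers.
# (The EXL2 files in this repo require an exllamav2-based loader instead.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "migtissera/Tess-M-v1.0"  # assumed from the card's image link
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)

system = "You are Tess, a helpful assistant."  # hypothetical system message
user = "What is the relationship between Earth's atmosphere, magnetic field and gravity?"
prompt = f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```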
empbetty/dogSimilarToTangyuan-lora-2
empbetty
2023-11-18T00:45:50Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-11-17T11:42:03Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - empbetty/dogSimilarToTangyuan-lora-2 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the empbetty/dog-similar-to-tangyuan-dataset dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
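A minimal inference sketch for these weights, assuming they load with diffusers' standard `load_lora_weights()` utility; the prompt is a hypothetical example, since the card does not document a trigger phrase.

```python
# Hedged sketch: assumes the adapter is compatible with load_lora_weights().
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("empbetty/dogSimilarToTangyuan-lora-2")

# Hypothetical prompt; the card does not document a trigger word.
image = pipe("a photo of a dog similar to tangyuan", num_inference_steps=30).images[0]
image.save("dog.png")
```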
SebastianSchramm/Cerebras-GPT-111M-instruction-sft-lora
SebastianSchramm
2023-11-18T00:41:09Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "conversational", "base_model:SebastianSchramm/Cerebras-GPT-111M-instruction", "base_model:finetune:SebastianSchramm/Cerebras-GPT-111M-instruction", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-17T16:51:13Z
--- base_model: SebastianSchramm/Cerebras-GPT-111M-instruction tags: - generated_from_trainer model-index: - name: Cerebras-GPT-111M-instruction-sft-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Cerebras-GPT-111M-instruction-sft-lora This model is a fine-tuned version of [SebastianSchramm/Cerebras-GPT-111M-instruction](https://huggingface.co/SebastianSchramm/Cerebras-GPT-111M-instruction) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.1815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 128 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.1636 | 0.62 | 251 | 5.1815 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
LoneStriker/Tess-M-v1.0-5.0bpw-h6-exl2
LoneStriker
2023-11-18T00:20:59Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-18T00:19:35Z
--- license: other license_name: yi-34b license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE --- # Tess ![Tess](https://huggingface.co/migtissera/Tess-M-v1.0/resolve/main/Tess.png) Tess, short for Tessoro/Tessoso, is a general-purpose Large Language Model series. Tess-M-v1.0 was trained on the Yi-34B-200K base. # Prompt Format: ``` SYSTEM: USER: What is the relationship between Earth's atmosphere, magnetic field and gravity? ASSISTANT: ```
Open-Orca/Mistral-7B-SlimOrca
Open-Orca
2023-11-18T00:14:57Z
4,011
35
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "dataset:Open-Orca/SlimOrca", "arxiv:2306.02707", "arxiv:2301.13688", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-08T17:42:56Z
--- datasets: - Open-Orca/SlimOrca language: - en library_name: transformers pipeline_tag: text-generation license: apache-2.0 --- <p><h1>🐋 Mistral-7B-SlimOrca 🐋</h1></p> PRE-RELEASE, DEMO MODEL ![OpenOrca Logo](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/resolve/main/Images/MistralOrcaLogo.png "MistralOrca Logo") [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # OpenOrca - Mistral - 7B - 8k - Slim Data! We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1). This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707). We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This model is being released as a demonstration of the performance of our new curated subset of the OpenOrca data: **[SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)**. This new dataset release provides an efficient means of reaching performance on-par with using larger slices of our data, while only including ~500k GPT-4 completions. HF Leaderboard evals place this model at near parity with our recent [MistralOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) release, which was the #1 model at the time of its release. Codename: "*MistralSlimOrca*" We are in-process with training more models, so keep a lookout on our org for releases coming soon with exciting partners. We will also give sneak-peek announcements on our Discord, which you can find here: https://AlignmentLab.ai or check the OpenAccess AI Collective Discord for more information about the Axolotl trainer here: https://discord.gg/5y8STgB3P3 # Prompt Template We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this. This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the "`MPT-Chat`" instruction template should work, as it also uses ChatML. This formatting is also available via a pre-defined [Transformers chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means that lists of messages can be formatted for you with the `apply_chat_template()` method: ```python chat = [ {"role": "system", "content": "You are MistralSlimOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!"}, {"role": "user", "content": "How are you?"}, {"role": "assistant", "content": "I am doing well!"}, {"role": "user", "content": "Please tell me about how mistral winds have attracted super-orcas."}, ] tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` which will yield: ``` <|im_start|>system You are MistralSlimOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|> <|im_start|>user How are you?<|im_end|> <|im_start|>assistant I am doing well!<|im_end|> <|im_start|>user Please tell me about how mistral winds have attracted super-orcas.<|im_end|> <|im_start|>assistant ``` If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized and formatted conversation ready to pass to `model.generate()`. # Inference See [this notebook](https://colab.research.google.com/drive/tbd) for inference details. Note that you need the development snapshot of Transformers currently, as support for Mistral hasn't been released into PyPI yet: ``` pip install git+https://github.com/huggingface/transformers ``` # Evaluation ## HuggingFace Leaderboard Performance We have evaluated using the methodology and tools for the HuggingFace Leaderboard, and find that we have dramatically improved upon the base model. We find **106%** of the base model's performance on HF Leaderboard evals, averaging **65.85**. This is also **98.6%** of *`Llama2-70b-chat`*'s performance! ![HF Leaderboard](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca/resolve/main/Images/MistralSlimOrca7BHFLeaderboard.png) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 62.77 | | ARC (25-shot) | 62.54 | | HellaSwag (10-shot) | 83.86 | | TruthfulQA (0-shot) | 54.23 | | Avg. | 65.85 | We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. # Dataset We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset. The key change in this dataset is that we've done an additional pass, using GPT-4 to remove answers which appear wrong based on the human annotations from the FLAN dataset. This reduces the dataset size to only ~500k entries, allowing training to a similar quality level to our previous releases with 2/3 the compute requirement. # Training We trained with 8x A6000 GPUs for 40 hours, completing 4 epochs of full fine tuning on our dataset in one training run. Commodity cost was ~$240. # Citation ```bibtex @software{lian2023mistralslimorca1, title = {MistralSlimOrca: Mistral-7B Model Instruct-tuned on Filtered, Corrected, OpenOrcaV1 GPT-4 Dataset}, author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, url = {https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca} } @misc{SlimOrca, title = {SlimOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification}, author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, url = {https://huggingface.co/Open-Orca/SlimOrca} } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V.
Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
Open-Orca/Mistral-7B-OpenOrca
Open-Orca
2023-11-18T00:13:08Z
19,305
677
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "en", "dataset:Open-Orca/OpenOrca", "arxiv:2306.02707", "arxiv:2301.13688", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-29T19:18:38Z
--- datasets: - Open-Orca/OpenOrca language: - en library_name: transformers pipeline_tag: text-generation license: apache-2.0 --- <p><h1>🐋 Mistral-7B-OpenOrca 🐋</h1></p> ![OpenOrca Logo](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/resolve/main/Images/MistralOrcaLogo.png "MistralOrca Logo") [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # OpenOrca - Mistral - 7B - 8k We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1). This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707). We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B). **HF Leaderboard evals place this model as #1 for all models smaller than 30B at release time, outperforming all other 7B and 13B models!** This release provides a first: a fully open model with class-breaking performance, capable of running fully accelerated on even moderate consumer GPUs. Our thanks to the Mistral team for leading the way here. We affectionately codename this model: "*MistralOrca*" If you'd like to try the model now, we have it running on fast GPUs unquantized: https://huggingface.co/spaces/Open-Orca/Mistral-7B-OpenOrca Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). [<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) We are in-process with training more models, so keep a lookout on our org for releases coming soon with exciting partners. We will also give sneak-peek announcements on our Discord, which you can find here: https://AlignmentLab.ai or check the OpenAccess AI Collective Discord for more information about the Axolotl trainer here: https://discord.gg/5y8STgB3P3 # Quantized Models Quantized versions of this model are generously made available by [TheBloke](https://huggingface.co/TheBloke). - AWQ: https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ - GPTQ: https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ - GGUF: https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF # Prompt Template We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this. This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the "`MPT-Chat`" instruction template should work, as it also uses ChatML.
This formatting is also available via a pre-defined [Transformers chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means that lists of messages can be formatted for you with the `apply_chat_template()` method: ```python chat = [ {"role": "system", "content": "You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!"}, {"role": "user", "content": "How are you?"}, {"role": "assistant", "content": "I am doing well!"}, {"role": "user", "content": "Please tell me about how mistral winds have attracted super-orcas."}, ] tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` which will yield: ``` <|im_start|>system You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers! <|im_end|> <|im_start|>user How are you?<|im_end|> <|im_start|>assistant I am doing well!<|im_end|> <|im_start|>user Please tell me about how mistral winds have attracted super-orcas.<|im_end|> <|im_start|>assistant ``` If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized and formatted conversation ready to pass to `model.generate()`. # Inference See [this notebook](https://colab.research.google.com/drive/1yZlLSifCGELAX5GN582kZypHCv0uJuNX?usp=sharing) for inference details. Note that you need the development snapshot of Transformers currently, as support for Mistral hasn't been released into PyPI yet: ``` pip install git+https://github.com/huggingface/transformers ``` # Evaluation ## HuggingFace Leaderboard Performance We have evaluated using the methodology and tools for the HuggingFace Leaderboard, and find that we have dramatically improved upon the base model. We find **106%** of the base model's performance on HF Leaderboard evals, averaging **65.84**. At release time, this beats all 7B and 13B models! This is also **98.6%** of *`Llama2-70b-chat`*'s performance! ![HF Leaderboard](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/resolve/main/Images/MistralOrca7BHFLeaderboard.png) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 62.24 | | ARC (25-shot) | 64.08 | | HellaSwag (10-shot) | 83.99 | | TruthfulQA (0-shot) | 53.05 | | Avg. | 65.84 | We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. ## AGIEval Performance We compare our results to the base Mistral-7B model (using LM Evaluation Harness). We find **129%** of the base model's performance on AGI Eval, averaging **0.397**. As well, we significantly improve upon the official `mistralai/Mistral-7B-Instruct-v0.1` finetuning, achieving **119%** of their performance. ![AGIEval Performance](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/resolve/main/Images/MistralOrca7BAGIEval.png "AGIEval Performance") ## BigBench-Hard Performance We find **119%** of the base model's performance on BigBench-Hard, averaging **0.416**. ![BigBench-Hard Performance](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/resolve/main/Images/MistralOrca7BBigBenchHard.png "BigBench-Hard Performance") ## GPT4ALL Leaderboard Performance We gain a slight edge over our previous releases, again topping the leaderboard, averaging **72.38**.
![GPT4ALL Performance](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/resolve/main/Images/MistralOrca7BGPT4ALL.png "GPT4ALL Performance") ## MT-Bench Performance MT-Bench uses GPT-4 as a judge of model response quality, across a wide range of challenges. We find our performance is *on-par with `Llama2-70b-chat`*, averaging **6.86**. ![MT-Bench Performance](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/resolve/main/Images/MistralOrca7BMTBENCH.png "MT-Bench Performance") # Dataset We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset. # Training We trained with 8x A6000 GPUs for 62 hours, completing 4 epochs of full fine tuning on our dataset in one training run. Commodity cost was ~$400. # Citation ```bibtex @software{lian2023mistralorca1, title = {MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}, author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca}}, } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
Corianas/Quokka_1.3b
Corianas
2023-11-18T00:09:11Z
1,496
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "dataset:the_pile", "dataset:guanaco/guanaco", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-07T02:34:19Z
---
license: apache-2.0
datasets:
- the_pile
- guanaco/guanaco
language:
- en
---

# Model Card for Cerebras 1.3B Dollyfied

This is a finetuned version of the Cerebras 1.3B model, trained using the DataBricksLabs Dolly framework.

## Model Details

### Model Description

This is a finetuned version of Cerebras' 1.3-billion-parameter model that has been trained to follow instructions. It was accomplished using the DataBricks Dolly training tools, and was trained for 2 epochs.

- **Developed by:** Finetuned by Corianas (me) using open source tools
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** EN
- **License:** cc-by-nc-4.0
- **Finetuned from model:** https://huggingface.co/cerebras/Cerebras-GPT-1.3B
- **Finetuned using:** https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

## Uses

This is a simple GPT chatbot that has been finetuned to understand instructions. Its knowledge of facts about the world should be considered suspect at best.

### Direct Use

If you have a use you put it to, please let me know.

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

Any form of use where any form of accuracy is needed. FOR THE LOVE OF GOD DO NOT FOLLOW MEDICAL ADVICE FROM THIS. Or financial advice.

[More Information Needed]

## Bias, Risks, and Limitations

Limitations... Yes, I am sure there are so, so many.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 8xA100s (accomplished while I was downloading the model I was actually training.)
- **Minutes used:** 17
- **Cloud Provider:** LambdaGPU
- **Compute Region:** USA
- **Carbon Emitted:** [More Information Needed]

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_1.3b)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 27.1 |
| ARC (25-shot) | 27.73 |
| HellaSwag (10-shot) | 37.91 |
| MMLU (5-shot) | 26.66 |
| TruthfulQA (0-shot) | 40.14 |
| Winogrande (5-shot) | 52.72 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 4.54 |
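For completeness, a minimal usage sketch follows. The exact instruction format this checkpoint expects is not documented in the card, so a plain-text prompt is an assumption here:

```python
# Minimal usage sketch; the prompt format is assumed, not documented in the card.
from transformers import pipeline

generator = pipeline("text-generation", model="Corianas/Quokka_1.3b")
output = generator(
    "Explain what a language model is in one sentence.",
    max_new_tokens=64,
    do_sample=True,
)
print(output[0]["generated_text"])
```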
Corianas/1.3b
Corianas
2023-11-18T00:09:08Z
1,495
2
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "dataset:tatsu-lab/alpaca", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-29T14:28:32Z
---
license: cc-by-nc-4.0
datasets:
- tatsu-lab/alpaca
language:
- en
---

# Model Card for Cerebras 1.3B Dollyfied

This is a finetuned version of the Cerebras 1.3B model, trained using the DataBricksLabs Dolly framework.

# Example output:

Instruction: Write a love letter to Edgar Allan Poe.

Edgar Allan Poe is here again, with a brand new treasure of tales for you to enjoy. First, a man's life as a fallen angel, then his lonely falls into the sea, and finally a journey through the wild, exciting lands of the American West. He also carries a mysterious message for us: never stop searching for our own sweet. Thank you, Edgar Allan Poe!

-----------

Instruction: Write a tweet announcing Dolly, a large language model from Databricks.

Dolly a brand new model from Databricks that has been thoroughly validated and improved with feedback from experts and users! #databricks #languagemodels #databrickslanguage

-----------

Instruction: I'm selling my Nikon D-750, write a short blurb for my ad.

We bring you the revolutionary Nikon D-750 - the perfect camera for any shoot. With the Nikon D-750, you'll be able to achieve stunning images with state-of-the-art features and brilliant images. The Nikon D-750 is the perfect camera for everyday work, and it produces brilliant, vibrant images with spectacular clarity. Whether you’re a professional, a hobbyist, or an enthusiast, the Nikon D-750 is the perfect camera for you. Get yours today!

-----------

Instruction: Explain to me the difference between nuclear fission and fusion.

Nuclear fission and fusion are both processes that use nuclear material and energy to generate heat. Fission, which occurs at high enough temperatures, produces either a proton or a neutron with the same energy level. Fusion, on the other hand, relies on a hydrogen atom and is considered to be the most stable process. Nuclear fission and fusion produce different amounts of heat, which is why nuclear fusion can be used to generate heat energy for a variety of applications.

-----------

Instruction: Give me a list of 5 science fiction books I should read next.

1. The coming of the Red Planet by Guy Adams
2. The Hitchhiker's Guide to the Galaxy by Douglas Adams
3. Out of the Ashes by Douglas Adams
4. The Hitchhiker's Guide to the Galaxy episode "The Ancient of Days"
5. Icy Girls by J.R.R. Tolkien

-----------

## Model Details

### Model Description

This is a finetuned version of Cerebras' 1.3-billion-parameter model that has been trained to follow instructions. It was accomplished using the DataBricks Dolly training tools.

- **Developed by:** Finetuned by Corianas (me) using open source tools
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** EN
- **License:** cc-by-nc-4.0
- **Finetuned from model:** https://huggingface.co/cerebras/Cerebras-GPT-1.3b
- **Finetuned using:** https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

## Uses

This is a simple GPT chatbot that has been finetuned to understand instructions. Its knowledge of facts about the world should be considered suspect at best.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app.
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 8xA100 - **Hours used:** 30mins - **Cloud Provider:** Lambda GPU ## Technical Specifications [optional] ### Model Architecture and Objective GPT2 Cerebras-GPT 1.3B Layers 24 n_embd 2048 Heads 16 #### Hardware 8xA100s #### Software https://github.com/databrickslabs/dolly ## Model Card Contact [More Information Needed] # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__1.3b) | Metric | Value | |-----------------------|---------------------------| | Avg. 
| 27.02 | | ARC (25-shot) | 27.3 | | HellaSwag (10-shot) | 38.3 | | MMLU (5-shot) | 26.77 | | TruthfulQA (0-shot) | 39.02 | | Winogrande (5-shot) | 53.04 | | GSM8K (5-shot) | 0.15 | | DROP (3-shot) | 4.57 |
Corianas/590m
Corianas
2023-11-18T00:09:05Z
1,492
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "dataset:tatsu-lab/alpaca", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-29T14:38:42Z
---
license: cc-by-nc-4.0
datasets:
- tatsu-lab/alpaca
language:
- en
---

# Model Card for Model ID

This is a finetuned version of the Cerebras 590M model, trained using the DataBricksLabs Dolly framework.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Finetuned by Corianas (me) using open source tools
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** EN
- **License:** cc-by-nc-4.0
- **Finetuned from model [optional]:** https://huggingface.co/cerebras/Cerebras-GPT-590M

## Uses

This is a simple GPT chatbot that has been finetuned to understand instructions. Its knowledge of facts about the world should be considered almost non-existent.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

Instruction: Write a love letter to Edgar Allan Poe.

Dear Edgar,

I hope you are doing well. As a person of your age and background, you are always learning new things and never too sure of your own best skills. May your future brighten up your days with the learning you have to offer. I can't wait to read your book and see the way it influenced you.
Thanks for the inspiration. It is so much fun to explore your special talents. Sincerely, ----------- Instruction: Write a tweet announcing Dolly, a large language model from Databricks. Dolly is the world's first massive language model, available now in India. #IndiaDolly #Databricks ----------- Instruction: I'm selling my Nikon D-750, write a short blurb for my ad. I'm selling my Nikon D-750, someone whose dedication to imaging means he/she is always ready to answer questions, be creative and take chances on what really matters. A custom-designed top hat, a wide-angle lens, and an affordable ultrabuck remote offer the ultimate in quality. He/She can even go as far as to order a custom-made outdoor halogen pass. And if he/She is more than willing to take on any challenge, there is no doubt that with a creative lens, personal shots, and a custom top hat, photography is at its peak. As a satisfied customer, I would ----------- Instruction: Explain to me the difference between nuclear fission and fusion. Nuclear fission is a process that occurs during the production of nuclear fuel, while fusion is a process that occurs in the fusion industry. Nuclear fission is used to create the active ingredient in nuclear fuel, such as fission products, fusion plutonium, and decontamination services. Fusion requires a lower pressure in the fusion process and is more flexible compared to nuclear fission. Nuclear fission fuses with fission products, fission plutonium, and decontamination services. ----------- Instruction: Give me a list of 5 science fiction books I should read next. 1. How to Translate a Face into a System 2. How to Fly a Surrealist 3. How to Survive an Army Wildlife Hunt 4. How to Strike an iPhone 5. How to Teach a Hackathon ----------- #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__590m) | Metric | Value | |-----------------------|---------------------------| | Avg. | 25.62 | | ARC (25-shot) | 24.15 | | HellaSwag (10-shot) | 31.91 | | MMLU (5-shot) | 26.61 | | TruthfulQA (0-shot) | 42.19 | | Winogrande (5-shot) | 48.38 | | GSM8K (5-shot) | 0.08 | | DROP (3-shot) | 6.03 |
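The cards in this series leave the "How to Get Started" section empty. Since the model was trained with the DataBricksLabs Dolly tools, one plausible way to prompt it is the Alpaca-style template that the upstream Dolly repository commonly uses; the exact template below is an assumption based on that repository, not something stated in this card, so verify it against the actual training code before relying on it:

```python
# Hypothetical helper reconstructing the Alpaca-style prompt that the Dolly
# training framework commonly uses. This template is an assumption; this card
# does not state the prompt format explicitly.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Format a raw instruction into the assumed Dolly/Alpaca prompt."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

print(build_prompt("Write a tweet announcing Dolly."))
```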
Corianas/111m
Corianas
2023-11-18T00:08:59Z
1,503
2
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "dataset:tatsu-lab/alpaca", "dataset:the_pile", "arxiv:1910.09700", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-29T14:01:30Z
---
license: cc-by-nc-sa-4.0
datasets:
- tatsu-lab/alpaca
- the_pile
---

# Model Card for Cerebras 111M Dollyfied

This is a finetuned version of the Cerebras 111M model, trained using the DataBricksLabs Dolly framework.

## Model Details

### Model Description

This is a finetuned version of Cerebras' 111-million-parameter model that has been trained to follow instructions. It was accomplished using the DataBricks Dolly training tools and the Alpaca dataset, and was trained for 2 epochs.

- **Developed by:** Finetuned by Corianas (me) using open source tools
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** EN
- **License:** cc-by-nc-4.0
- **Finetuned from model:** https://huggingface.co/cerebras/Cerebras-GPT-111m
- **Finetuned using:** https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

## Uses

This is a simple GPT chatbot that has been finetuned to understand instructions. Its knowledge of facts about the world should be considered suspect at best.

### Direct Use

If you have a use you put it to, please let me know.

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

Any form of use where any form of accuracy is needed. FOR THE LOVE OF GOD DO NOT FOLLOW MEDICAL ADVICE FROM THIS. Or financial advice.

[More Information Needed]

## Bias, Risks, and Limitations

Limitations... Yes, I am sure there are so, so many.

[More Information Needed]

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here.
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 8xA100s (accomplished while I was downloading the model I was actually training.) - **Minutes used:** 7.5 - **Cloud Provider:** LambdaGPU - **Compute Region:** USA - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__111m) | Metric | Value | |-----------------------|---------------------------| | Avg. | 24.04 | | ARC (25-shot) | 19.71 | | HellaSwag (10-shot) | 26.68 | | MMLU (5-shot) | 25.28 | | TruthfulQA (0-shot) | 43.72 | | Winogrande (5-shot) | 50.2 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 2.69 |
Corianas/Quokka_590m
Corianas
2023-11-18T00:08:52Z
1,497
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "dataset:the_pile", "dataset:guanaco/guanaco", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-07T04:18:55Z
---
license: apache-2.0
datasets:
- the_pile
- guanaco/guanaco
language:
- en
---

# Model Card for Cerebras 590M Dollyfied

This is a finetuned version of the Cerebras 590M model, trained using the DataBricksLabs Dolly framework.

## Model Details

### Model Description

This is a finetuned version of Cerebras' 590-million-parameter model that has been trained to follow instructions. It was accomplished using the DataBricks Dolly training tools, and was trained for 2 epochs.

- **Developed by:** Finetuned by Corianas (me) using open source tools
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** EN
- **License:** cc-by-nc-4.0
- **Finetuned from model:** https://huggingface.co/cerebras/Cerebras-GPT-590m
- **Finetuned using:** https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

## Uses

This is a simple GPT chatbot that has been finetuned to understand instructions. Its knowledge of facts about the world should be considered suspect at best.

### Direct Use

If you have a use you put it to, please let me know.

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

Any form of use where any form of accuracy is needed. FOR THE LOVE OF GOD DO NOT FOLLOW MEDICAL ADVICE FROM THIS. Or financial advice.

[More Information Needed]

## Bias, Risks, and Limitations

Limitations... Yes, I am sure there are so, so many.

[More Information Needed]

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here.
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 8xA100s (accomplished while I was downloading the model I was actually training.) - **Minutes used:** 7.5 - **Cloud Provider:** LambdaGPU - **Compute Region:** USA - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_590m) | Metric | Value | |-----------------------|---------------------------| | Avg. | 25.02 | | ARC (25-shot) | 24.4 | | HellaSwag (10-shot) | 31.61 | | MMLU (5-shot) | 25.36 | | TruthfulQA (0-shot) | 39.59 | | Winogrande (5-shot) | 50.2 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 3.96 |
actionpace/Bloomz-3G-MarketMail-AI
actionpace
2023-11-17T23:59:06Z
7
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bigscience/bloomz-3b", "base_model:adapter:bigscience/bloomz-3b", "region:us" ]
null
2023-11-17T23:59:00Z
--- library_name: peft base_model: bigscience/bloomz-3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.3.dev0
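Since the card's "How to Get Started" section is empty, a minimal sketch of loading this PEFT adapter follows. Loading the base model in 8-bit mirrors the bitsandbytes config listed above but is optional; the prompt is a placeholder, and none of this is an official example from the adapter author:

```python
# Minimal sketch for loading this LoRA/PEFT adapter on top of its base model.
# 8-bit loading mirrors the quantization config documented above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloomz-3b"
adapter_id = "actionpace/Bloomz-3G-MarketMail-AI"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Placeholder prompt; the adapter's expected input format is not documented.
inputs = tokenizer("Write a short marketing email about:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```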
uukuguy/speechless-mistral-six-in-one-7b
uukuguy
2023-11-17T23:58:00Z
1,409
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "llama-2", "code", "en", "dataset:jondurbin/airoboros-2.2.1", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:ehartford/samantha-data", "dataset:CollectiveCognition/chats-data-2023-09-27", "dataset:stingning/ultrachat", "arxiv:2310.06825", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-15T00:49:22Z
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
- CollectiveCognition/chats-data-2023-09-27
- stingning/ultrachat
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.0
      verified: false
---

<p><h1> speechless-mistral-six-in-one-7b </h1></p>

This model is a merge of 6 SOTA Mistral-7B-based models:

- ehartford/dolphin-2.1-mistral-7b
- Open-Orca/Mistral-7B-OpenOrca
- bhenrym14/mistral-7b-platypus-fp16
- ehartford/samantha-1.2-mistral-7b
- teknium/CollectiveCognition-v1.1-Mistral-7B
- HuggingFaceH4/zephyr-7b-alpha

[Model benchmark](https://huggingface.co/uukuguy/speechless-mistral-six-in-one-7b/discussions/1) by [sethuiyer](https://huggingface.co/sethuiyer). Thanks a lot.

> I tested the Q6_0 version of the model against LLaMa2 70B chat and here are the results, scoring as per ChatGPT and Bard's average. Named this model Mixtral. Questions taken from MT-Benchmark.
>
> On a scale of 0 to 100, I would rate Mixtral at 98. Here's why:
>
> - Intellect (100/100) - Mixtral has demonstrated immense intellectual abilities through its comprehensive knowledge and logical reasoning skills.
> - Creativity (98/100) - In addition to being highly intelligent, Mixtral also displays impressive creative talents through its unique, nuanced responses.
> - Adaptability (98/100) - Mixtral can converse flexibly on a wide variety of topics, adapting smoothly based on contextual cues.
> - Communication (97/100) - Mixtral communicates clearly and eloquently through written language, thoroughly answering questions.
> - Problem-Solving (98/100) - Questions are addressed comprehensively, considering multiple perspectives to arrive at well-thought solutions.
> - Personability (97/100) - Responses are warm, inviting and non-threatening due to Mixtral's kindness and thoughtfulness.
>
> Overall, a very capable model for its size.

Code: https://github.com/uukuguy/speechless

## HumanEval

| Metric | Value |
| --- | --- |
| humaneval-python | |

[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)

CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
Mistral-7B-v0.1: 30.488

## LM-Evaluation-Harness

[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

| Metric | Value |
| --- | --- |
| ARC | 62.97 |
| HellaSwag | 84.6 |
| MMLU | 63.29 |
| TruthfulQA | 57.77 |
| Winogrande | 77.51 |
| GSM8K | 18.42 |
| DROP | 9.13 |
| Average | 53.38 |

# Model Card for Mistral-7B-v0.1

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.

For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Model Architecture

Mistral-7B-v0.1 is a transformer model, with the following architecture choices:

- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Troubleshooting

- If you see the following error: `KeyError: 'mistral'`
- Or: `NotImplementedError: Cannot copy out of meta tensor; no data!`

Ensure you are using a stable version of Transformers, 4.34.0 or newer.

## Notice

Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-mistral-six-in-one-7b)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.38 |
| ARC (25-shot) | 62.97 |
| HellaSwag (10-shot) | 84.6 |
| MMLU (5-shot) | 63.29 |
| TruthfulQA (0-shot) | 57.77 |
| Winogrande (5-shot) | 77.51 |
| GSM8K (5-shot) | 18.42 |
| DROP (3-shot) | 9.13 |
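Following the troubleshooting note above, a minimal loading sketch (requires `transformers >= 4.34.0`; the dtype and device settings are illustrative choices, not recommendations from the model author):

```python
# Minimal loading sketch for the merged model; needs transformers >= 4.34.0
# for Mistral support, per the troubleshooting note above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-mistral-six-in-one-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```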
LoneStriker/Tess-M-v1.0-4.65bpw-h6-exl2
LoneStriker
2023-11-17T23:56:14Z
10
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-17T23:55:02Z
---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---

# Tess

![Tess](https://huggingface.co/migtissera/Tess-M-v1.0/resolve/main/Tess.png)

Tess, short for Tessoro/Tessoso, is a general-purpose large language model series. Tess-M-v1.0 was trained on the Yi-34B-200K base.

# Prompt Format:

```
SYSTEM:
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
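A tiny helper that assembles the prompt format above can be handy; the empty default system message mirrors the blank `SYSTEM:` line in the template, and the helper itself is an illustrative addition rather than part of the official card:

```python
# Small helper assembling the Tess prompt format shown above.
def tess_prompt(user_message: str, system_message: str = "") -> str:
    """Build a single-turn prompt in the SYSTEM/USER/ASSISTANT format."""
    return f"SYSTEM: {system_message}\nUSER: {user_message}\nASSISTANT: "

print(tess_prompt(
    "What is the relationship between Earth's atmosphere, magnetic field and gravity?"
))
```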
AdamCodd/bart-large-cnn-samsum
AdamCodd
2023-11-17T23:50:46Z
18
2
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "seq2seq", "summarization", "dataset:samsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-10-04T23:16:19Z
--- license: apache-2.0 tags: - generated_from_trainer - seq2seq - summarization datasets: - samsum metrics: - rouge widget: - text: > Emily: Hey Alex, have you heard about the new restaurant that opened downtown? Alex: No, I haven't. What's it called? Emily: It's called "Savory Bites." They say it has the best pasta in town. Alex: That sounds delicious. When are you thinking of checking it out? Emily: How about this Saturday? We can make it a dinner date. Alex: Sounds like a plan, Emily. I'm looking forward to it. model-index: - name: bart-large-cnn-samsum results: - task: type: summarization name: Summarization dataset: name: >- SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization type: samsum metrics: - type: rouge-1 value: 43.6283 name: Validation ROUGE-1 - type: rouge-2 value: 19.3096 name: Validation ROUGE-2 - type: rouge-l value: 41.214 name: Validation ROUGE-L --- # bart-large-cnn-samsum This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [samsum dataset](https://huggingface.co/datasets/samsum). It achieves the following results on the evaluation set: - Loss: 0.755 - Rouge1: 43.6283 - Rouge2: 19.3096 - Rougel: 41.2140 - Rougelsum: 37.2590 ## Model description More information needed ## Intended uses & limitations ```python from transformers import pipeline summarizer = pipeline("summarization", model="AdamCodd/bart-large-cnn-samsum") conversation = '''Emily: Hey Alex, have you heard about the new restaurant that opened downtown? Alex: No, I haven't. What's it called? Emily: It's called "Savory Bites." They say it has the best pasta in town. Alex: That sounds delicious. When are you thinking of checking it out? Emily: How about this Saturday? We can make it a dinner date. Alex: Sounds like a plan, Emily. I'm looking forward to it. ''' result = summarizer(conversation) print(result) ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 1270 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 1 ### Training results | key | value | | --- | ----- | | eval_rouge1 | 43.6283 | | eval_rouge2 | 19.3096 | | eval_rougeL | 41.2140 | | eval_rougeLsum | 37.2590 | ### Framework versions - Transformers 4.34.0 - Pytorch lightning 2.0.9 - Tokenizers 0.14.0 If you want to support me, you can [here](https://ko-fi.com/adamcodd).
dewanshsinghchandel/output
dewanshsinghchandel
2023-11-17T23:21:14Z
6
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "dataset:wikitext", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-17T23:20:56Z
--- tags: - generated_from_trainer datasets: - wikitext model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [](https://huggingface.co/) on the wikitext dataset. It achieves the following results on the evaluation set: - Loss: 7.4872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 5 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 75 | 7.8620 | | No log | 2.0 | 150 | 7.4874 | | No log | 3.0 | 225 | 7.4872 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
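The card gives no usage snippet, so here is a minimal fill-mask sketch. Note that the high validation loss above suggests the model is barely trained, so predictions will be close to random; `[MASK]` as the mask token is the standard BERT convention and an assumption here:

```python
# Minimal fill-mask sketch; given the high validation loss above, expect
# near-random predictions. [MASK] is assumed to be the mask token.
from transformers import pipeline

fill = pipeline("fill-mask", model="dewanshsinghchandel/output")
for pred in fill("The capital of France is [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.4f}")
```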
cgperal/vit-base-patch16-224-finetuned-testflower
cgperal
2023-11-17T23:12:46Z
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-17T23:02:42Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: vit-base-patch16-224-finetuned-testflower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-testflower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.1.0+cu118 - Datasets 2.7.1 - Tokenizers 0.13.3
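The card gives no inference example, so here is a minimal sketch; the class labels depend on the (undocumented) `imagefolder` dataset used for fine-tuning, and the image path is a placeholder:

```python
# Minimal inference sketch for the fine-tuned ViT classifier.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="cgperal/vit-base-patch16-224-finetuned-testflower",
)
# "flower.jpg" is a placeholder; a local path or an image URL both work.
predictions = classifier("flower.jpg")
print(predictions)
```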
chavinlo/gpt4-x-alpaca
chavinlo
2023-11-17T23:10:37Z
1,639
483
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-31T13:03:46Z
# GPT4 x Alpaca

As a base model we used: https://huggingface.co/chavinlo/alpaca-13b

Finetuned on GPT-4's responses, for 3 epochs.

NO LORA

Please note that the configuration files may be messed up; this is because of the trainer I used. I WILL NOT EDIT THEM, because there are repos that automatically fix this, and changing the files myself might break those tools. Generally you just need to change anything that's under the name of "LLaMa" to "Llama". NOTE THE UPPER AND LOWER CASE!!!!

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.78 |
| ARC (25-shot) | 52.82 |
| HellaSwag (10-shot) | 79.59 |
| MMLU (5-shot) | 48.19 |
| TruthfulQA (0-shot) | 48.88 |
| Winogrande (5-shot) | 70.17 |
| GSM8K (5-shot) | 2.81 |
| DROP (3-shot) | 24.99 |
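A small sketch of the casing fix described above: download `config.json`, normalize the `LLaMa`-style spellings to `Llama`, and save a corrected copy locally. The blanket string replacement is an assumption, since the card does not list exactly which keys are affected:

```python
# Hypothetical sketch of the casing fix described above: normalize "LLaMa"
# (and similar variants) to "Llama" in config.json. A blanket string
# replacement is assumed; the card does not list the affected keys.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download("chavinlo/gpt4-x-alpaca", "config.json")
text = open(path).read()
for variant in ("LLaMA", "LLaMa", "LLama"):
    text = text.replace(variant, "Llama")

config = json.loads(text)  # sanity-check that the result is still valid JSON
with open("config_fixed.json", "w") as f:
    json.dump(config, f, indent=2)
```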
brettbbb/vicuna_mc_finetune
brettbbb
2023-11-17T23:10:07Z
0
0
null
[ "safetensors", "generated_from_trainer", "dataset:truthful_qa", "base_model:lmsys/vicuna-7b-v1.5", "base_model:finetune:lmsys/vicuna-7b-v1.5", "license:llama2", "region:us" ]
null
2023-11-16T03:27:03Z
--- license: llama2 base_model: lmsys/vicuna-7b-v1.5 tags: - generated_from_trainer datasets: - truthful_qa model-index: - name: vicuna_mc_finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vicuna_mc_finetune This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the truthful_qa dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.14.1
HWERI/Llama2-7b-openorca-mc-v2
HWERI
2023-11-17T23:04:25Z
10
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-23T05:44:30Z
---
license: apache-2.0
---

Llama2-7b finetuned on a 10k subset of OpenOrca focused on multiple-choice questions, plus a 6k ShareGPT4 dataset.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beaugogh__Llama2-7b-openorca-mc-v2)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 45.88 |
| ARC (25-shot) | 55.55 |
| HellaSwag (10-shot) | 81.26 |
| MMLU (5-shot) | 48.3 |
| TruthfulQA (0-shot) | 51.49 |
| Winogrande (5-shot) | 72.85 |
| GSM8K (5-shot) | 5.38 |
| DROP (3-shot) | 6.32 |
HWERI/pythia-1.4b-deduped-sharegpt
HWERI
2023-11-17T23:04:07Z
1,529
2
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "zh", "en", "ko", "ja", "dataset:shibing624/sharegpt_gpt4", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-10T10:27:20Z
---
license: apache-2.0
datasets:
- shibing624/sharegpt_gpt4
language:
- zh
- en
- ko
- ja
pipeline_tag: text-generation
---

# Model Card for Model ID

This model is Pythia-1.4B-deduped finetuned on the ShareGPT dataset.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HWERI__pythia-1.4b-deduped-sharegpt)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 30.79 |
| ARC (25-shot) | 34.3 |
| HellaSwag (10-shot) | 54.49 |
| MMLU (5-shot) | 24.0 |
| TruthfulQA (0-shot) | 41.81 |
| Winogrande (5-shot) | 55.25 |
| GSM8K (5-shot) | 0.83 |
| DROP (3-shot) | 4.88 |
crumb/qrstudy-gpt2-16-32
crumb
2023-11-17T22:50:25Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-17T22:50:19Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
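The quantization settings listed above map directly onto a `BitsAndBytesConfig`; a sketch of reconstructing them to load the base model and attach this adapter follows. This is illustrative, not an official example, and the base model id (`gpt2`) comes from the adapter metadata:

```python
# Rebuilding the documented 4-bit quantization config for the base model,
# then attaching this adapter. Mirrors the settings listed above.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, "crumb/qrstudy-gpt2-16-32")
```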
lgaalves/gpt2_guanaco-dolly-platypus
lgaalves
2023-11-17T22:44:00Z
1,519
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:databricks/databricks-dolly-15k", "dataset:timdettmers/openassistant-guanaco", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-31T22:55:40Z
---
license: mit
datasets:
- garage-bAInd/Open-Platypus
- databricks/databricks-dolly-15k
- timdettmers/openassistant-guanaco
language:
- en
pipeline_tag: text-generation
---

# gpt2_guanaco-dolly-platypus

**gpt2_guanaco-dolly-platypus** is an instruction fine-tuned model based on the GPT-2 transformer architecture.

### Benchmark Metrics

| Metric | gpt2_guanaco-dolly-platypus | GPT-2 (base) |
|-----------------------|-------|-------|
| Avg. | **30.25** | 29.9 |
| ARC (25-shot) | **23.55** | 21.84 |
| HellaSwag (10-shot) | 31.03 | **31.6** |
| MMLU (5-shot) | **26.4** | 25.86 |
| TruthfulQA (0-shot) | 40.02 | **40.67** |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **gpt2_guanaco-dolly-platypus** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English

### How to use:

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/gpt2_guanaco-dolly-platypus")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

or you can load the model directly using:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2_guanaco-dolly-platypus")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2_guanaco-dolly-platypus")
```

### Training Dataset

`lgaalves/gpt2_guanaco-dolly-platypus` was trained using 3 datasets:
- [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)
- [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)

### Training Procedure

`lgaalves/gpt2_guanaco-dolly-platypus` was instruction fine-tuned using LoRA on 1 T4 GPU on Google Colab. Training took about 1 hour.

# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it for a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2_guanaco-dolly-platypus)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.15 |
| ARC (25-shot) | 23.55 |
| HellaSwag (10-shot) | 31.03 |
| MMLU (5-shot) | 26.4 |
| TruthfulQA (0-shot) | 40.02 |
| Winogrande (5-shot) | 50.12 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 4.96 |
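The card promises reproduction instructions but does not include them. A plausible invocation of the 2023-era lm-evaluation-harness CLI is sketched below; the exact harness commit and flags used for the Leaderboard run are not stated in the card, so treat this as an assumption and adjust for newer harness releases.

```shell
# Sketch only: old-style harness CLI (hf-causal backend), ARC shown as one example task.
python main.py \
  --model hf-causal \
  --model_args pretrained=lgaalves/gpt2_guanaco-dolly-platypus \
  --tasks arc_challenge \
  --num_fewshot 25
```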
lgaalves/gpt2-dolly
lgaalves
2023-11-17T22:43:29Z
2,089
3
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-04T21:54:24Z
---
license: mit
datasets:
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
---

# GPT-2-dolly

**GPT-2-dolly** is an instruction fine-tuned model based on the GPT-2 transformer architecture.

### Benchmark Metrics

| Metric | GPT-2-dolly | GPT-2 (base) |
|-----------------------|-------|-------|
| Avg. | **30.91** | 29.99 |
| ARC (25-shot) | **22.70** | 21.84 |
| HellaSwag (10-shot) | 30.15 | **31.6** |
| MMLU (5-shot) | 25.81 | **25.86** |
| TruthfulQA (0-shot) | **44.97** | 40.67 |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **GPT-2-dolly** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English

### How to use:

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/gpt2-dolly")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

or you can load the model directly using:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2-dolly")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2-dolly")
```

### Training Dataset

`lgaalves/gpt2-dolly` was trained using the Databricks Dolly dataset [`databricks/databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k).

### Training Procedure

`lgaalves/gpt2-dolly` was instruction fine-tuned using LoRA on 1 T4 GPU on Google Colab. Training took about 1.5 hours.

# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it for a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2-dolly)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.53 |
| ARC (25-shot) | 22.7 |
| HellaSwag (10-shot) | 30.15 |
| MMLU (5-shot) | 25.81 |
| TruthfulQA (0-shot) | 44.97 |
| Winogrande (5-shot) | 51.46 |
| GSM8K (5-shot) | 0.15 |
| DROP (3-shot) | 3.45 |
lgaalves/tinyllama-1.1b-chat-v0.3_platypus
lgaalves
2023-11-17T22:42:57Z
1,522
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-09T22:05:57Z
---
license: mit
datasets:
- garage-bAInd/Open-Platypus
language:
- en
pipeline_tag: text-generation
---

# tinyllama-1.1b-chat-v0.3_platypus

**tinyllama-1.1b-chat-v0.3_platypus** is an instruction fine-tuned model based on the TinyLlama transformer architecture.

### Benchmark Metrics

| Metric |lgaalves/tinyllama-1.1b-chat-v0.3_platypus | tinyllama-1.1b-chat-v0.3 |
|-----------------------|-------|-------|
| Avg. | 37.67 | **38.74** |
| ARC (25-shot) | 30.29 | **35.07** |
| HellaSwag (10-shot) | 55.12 | **57.7** |
| MMLU (5-shot) | **26.13** | 25.53 |
| TruthfulQA (0-shot) | **39.15** | 36.67 |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **tinyllama-1.1b-chat-v0.3_platypus** is an auto-regressive language model based on the TinyLlama transformer architecture.
* **Language(s)**: English

### How to use:

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/tinyllama-1.1b-chat-v0.3_platypus")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

or you can load the model directly using:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/tinyllama-1.1b-chat-v0.3_platypus")
model = AutoModelForCausalLM.from_pretrained("lgaalves/tinyllama-1.1b-chat-v0.3_platypus")
```

### Training Dataset

`lgaalves/tinyllama-1.1b-chat-v0.3_platypus` was trained using the STEM- and logic-based dataset [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

### Training Procedure

`lgaalves/tinyllama-1.1b-chat-v0.3_platypus` was instruction fine-tuned using LoRA on 1 V100 GPU on Google Colab. Training took about 43 minutes.

# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it for a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__tinyllama-1.1b-chat-v0.3_platypus)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 30.28 |
| ARC (25-shot) | 30.29 |
| HellaSwag (10-shot) | 55.12 |
| MMLU (5-shot) | 26.13 |
| TruthfulQA (0-shot) | 39.15 |
| Winogrande (5-shot) | 55.8 |
| GSM8K (5-shot) | 0.53 |
| DROP (3-shot) | 4.94 |
lgaalves/llama-2-13b-chat-platypus
lgaalves
2023-11-17T22:42:23Z
1,523
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-06T17:13:57Z
---
license: llama2
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
language:
- en
---

# llama-2-13b-chat-platypus

**llama-2-13b-chat-platypus** is an instruction fine-tuned model based on the LLaMA2-13B transformer architecture.

### Benchmark Metrics

| Metric | llama-2-13b-chat-platypus | garage-bAInd/Platypus2-13B| llama-2-13b-chat-hf (base) |
|-----------------------|-------|-------|-------|
| Avg. | 58.8 |**61.35**| 59.93 |
| ARC (25-shot) | 53.84|**61.26**| 59.04 |
| HellaSwag (10-shot) | 80.67|**82.56**| 81.94 |
| MMLU (5-shot) | 54.44|**56.7**| 54.64 |
| TruthfulQA (0-shot) | **46.23**|44.86| 44.12 |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **llama-2-13b-chat-platypus** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English

### How to use:

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/llama-2-13b-chat-platypus")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

or you can load the model directly using:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/llama-2-13b-chat-platypus")
model = AutoModelForCausalLM.from_pretrained("lgaalves/llama-2-13b-chat-platypus")
```

### Training Dataset

`lgaalves/llama-2-13b-chat-platypus` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

### Training Procedure

`lgaalves/llama-2-13b-chat-platypus` was instruction fine-tuned using LoRA on 1 A100-40GB GPU; training took about 2 hours.

### Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__llama-2-13b-chat-platypus)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 47.12 |
| ARC (25-shot) | 53.84 |
| HellaSwag (10-shot) | 80.67 |
| MMLU (5-shot) | 54.44 |
| TruthfulQA (0-shot) | 46.23 |
| Winogrande (5-shot) | 76.01 |
| GSM8K (5-shot) | 12.36 |
| DROP (3-shot) | 6.26 |
baseten/mistral7b-fp16-ifb-2k-2k
baseten
2023-11-17T22:38:25Z
1
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
2023-11-17T17:01:21Z
```shell
python3 build.py --model_dir ./weights/mistral/7b/ \
  --remove_input_padding \
  --use_gpt_attention_plugin float16 \
  --enable_context_fmha \
  --use_gemm_plugin float16 \
  --output_dir ./mistral_engines/fp16/ \
  --max_batch_size 64 \
  --use_inflight_batching \
  --max_input_len 2000 \
  --max_output_len 2000 \
  --paged_kv_cache
```
edonath/bart-base-finetuned-CNN-DailyNews
edonath
2023-11-17T22:35:02Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-17T22:34:22Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-base-finetuned-CNN-DailyNews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-CNN-DailyNews This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8693 - Rouge1: 0.1878 - Rouge2: 0.1058 - Rougel: 0.1653 - Rougelsum: 0.1752 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 2.6219 | 1.0 | 63 | 1.9905 | 0.1791 | 0.1034 | 0.1598 | 0.169 | | 2.0525 | 2.0 | 126 | 1.8810 | 0.18 | 0.1038 | 0.1616 | 0.1701 | | 1.7989 | 3.0 | 189 | 1.8439 | 0.189 | 0.1064 | 0.1679 | 0.176 | | 1.6421 | 4.0 | 252 | 1.8411 | 0.1867 | 0.1066 | 0.1648 | 0.1739 | | 1.4826 | 5.0 | 315 | 1.8437 | 0.1862 | 0.107 | 0.1657 | 0.1748 | | 1.3858 | 6.0 | 378 | 1.8613 | 0.1881 | 0.1085 | 0.1671 | 0.1749 | | 1.2879 | 7.0 | 441 | 1.8674 | 0.1835 | 0.1021 | 0.1621 | 0.171 | | 1.2539 | 8.0 | 504 | 1.8693 | 0.1878 | 0.1058 | 0.1653 | 0.1752 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
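The card above reports ROUGE scores but includes no usage snippet. A minimal inference sketch follows; the input article here is a stand-in, and the generation lengths are arbitrary choices, not values from the card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and summarize a stand-in article.
summarizer = pipeline("summarization", model="edonath/bart-base-finetuned-CNN-DailyNews")
article = "Replace this with the text of a news article to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```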
Weni/ZeroShot-Multilanguage-3.0-Llama2-13B
Weni
2023-11-17T22:33:33Z
0
0
peft
[ "peft", "pytorch", "tensorboard", "llama", "region:us" ]
null
2023-11-17T15:12:25Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
himanshubeniwal/t5-small-finetuned-en-to-ro
himanshubeniwal
2023-11-17T22:33:23Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-17T19:44:34Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer datasets: - wmt16 model-index: - name: t5-small-finetuned-en-to-ro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-ro This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
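A minimal inference sketch for this checkpoint. The "translate English to Romanian: " prefix is the conventional T5 task prefix for WMT16 en-ro; the card does not state which prefix was used during fine-tuning, so this is an assumption.

```python
from transformers import pipeline

# t5-small checkpoints conventionally expect a task prefix for translation.
translator = pipeline("text2text-generation", model="himanshubeniwal/t5-small-finetuned-en-to-ro")
print(translator("translate English to Romanian: The weather is nice today.")[0]["generated_text"])
```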
binwang/xlnet-base-cased
binwang
2023-11-17T22:32:32Z
19
0
transformers
[ "transformers", "pytorch", "safetensors", "xlnet", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
This model is a pre-trained 12-layer **XLNet**. It accompanies the paper: SBERT-WK: A Sentence Embedding Method By Dissecting BERT-based Word Models

Project Page: [SBERT-WK](https://github.com/BinWang28/SBERT-WK-Sentence-Embedding)
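Since the card gives no loading snippet, here is a minimal sketch of pulling per-layer hidden states from this checkpoint; the SBERT-WK dissection and pooling themselves live in the project repo linked above, not in this snippet.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Extract all layers' hidden states; SBERT-WK pooling is applied on top of these.
tokenizer = AutoTokenizer.from_pretrained("binwang/xlnet-base-cased")
model = AutoModel.from_pretrained("binwang/xlnet-base-cased", output_hidden_states=True)

with torch.no_grad():
    out = model(**tokenizer("A sample sentence.", return_tensors="pt"))
print(len(out.hidden_states), out.hidden_states[-1].shape)  # 13 tensors: embeddings + 12 layers
```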
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_allWrong2
behzadnet
2023-11-17T22:27:29Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2023-11-17T22:27:26Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
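Given the base model named in the card metadata and the 4-bit `bitsandbytes` config listed above, a minimal loading sketch for this adapter would look like the following. This is a reconstruction from the card's config, not an official snippet from the author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit config mirroring the values in the card's training procedure section.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Trelis/Llama-2-7b-chat-hf-sharded-bf16")
model = PeftModel.from_pretrained(base, "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_allWrong2")
```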
Weni/ZeroShot-Multilanguage-3.0-noquant-Llama2-13B
Weni
2023-11-17T22:03:18Z
1
0
peft
[ "peft", "pytorch", "tensorboard", "llama", "region:us" ]
null
2023-11-17T15:47:54Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
Norquinal/llama-2-7b-claude-chat-rp
Norquinal
2023-11-17T21:59:20Z
1,561
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:Norquinal/claude_pippa_bluemoon_2k", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-14T05:31:16Z
--- datasets: - Norquinal/claude_pippa_bluemoon_2k --- This is a [LLaMA-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) model fine-tuned using QLoRA (4-bit precision) on my [claude_pippa_bluemoon_2k](https://huggingface.co/datasets/Norquinal/claude_pippa_bluemoon_2k) dataset, which is a randomized subset of ~1000 samples from my [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset combined with an additional ~600 randomized samplings from the [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) and [Bluemoon RP](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) datasets. The intent was to improve the finetune's ability to RP and write creative fiction. To that end, it seems to have worked, if only marginally. Do not take this model very seriously, it is probably not very good. I haven't a clue of what I'm doing. I just thought it was a fun thing to make. ## Prompt Format The model was finetuned with the Vicuna 1.1 prompt format: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} ASSISTANT: ``` ## Example Usage >USER: Write a story about a boy who wakes up in the middle of a magical forest. >ASSISTANT: Once upon a time there was a young boy named Jack. One night he had a dream that he was walking through a beautiful enchanted forest filled with talking animals and strange creatures. When he opened his eyes it turned out to be real! He found himself standing in the center of this mysterious place surrounded by tall trees and lush greenery. > >The first thing Jack did was ask for directions from one of the many colorful birds flying overhead. They told him that if he followed the path straight ahead he would eventually come across a village inhabited by friendly fairies. So off he went down the well-worn trail leading deeper into the woods. After several hours of hiking he finally reached the edge of the fairy settlement where tiny houses made entirely of flowers dotted the landscape like jewels on a crown. > >Jack knocked at the door of the largest cottage which belonged to the king and queen of these mystical folk. Upon entering they welcomed him warmly as their newest guest. Over tea and scones they explained how magic flowed freely throughout their land but only those pure of heart could see its beauty. Jack promised not to tell anyone else about what he had witnessed here so long as he might return someday when needed most. With that assurance given, the fairies bid him goodnight and sent him back along the same path he came from - now knowing more than ever before just how special life truly is. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Norquinal__llama-2-7b-claude-chat-rp) | Metric | Value | |-----------------------|---------------------------| | Avg. | 44.78 | | ARC (25-shot) | 54.95 | | HellaSwag (10-shot) | 80.05 | | MMLU (5-shot) | 47.03 | | TruthfulQA (0-shot) | 43.47 | | Winogrande (5-shot) | 74.74 | | GSM8K (5-shot) | 7.28 | | DROP (3-shot) | 5.94 |
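The card documents the Vicuna 1.1 prompt format but not a generation snippet. A minimal sketch that assembles that prompt is shown below; the generation settings are arbitrary choices, not values from the card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norquinal/llama-2-7b-claude-chat-rp")
model = AutoModelForCausalLM.from_pretrained("Norquinal/llama-2-7b-claude-chat-rp", device_map="auto")

# Wrap the user instruction in the Vicuna 1.1 format documented above.
instruction = "Write a story about a boy who wakes up in the middle of a magical forest."
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: {instruction} ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```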
Weni/ZeroShot-Multilanguage-3.0-noquantpeft-Llama2-13B
Weni
2023-11-17T21:56:58Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "llama", "text-generation", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-Llama2-13b", "base_model:finetune:NousResearch/Nous-Hermes-Llama2-13b", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-17T16:11:12Z
--- license: mit base_model: NousResearch/Nous-Hermes-Llama2-13b tags: - generated_from_trainer model-index: - name: ZeroShot-Multilanguage-3.0-noquantpeft-Llama2-13B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZeroShot-Multilanguage-3.0-noquantpeft-Llama2-13B This model is a fine-tuned version of [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3500 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.0 - Tokenizers 0.13.3
migtissera/Synthia-70B-v1.2b
migtissera
2023-11-17T21:32:48Z
1,488
28
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.02707", "arxiv:2302.13971", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-10T02:51:23Z
---
license: llama2
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

Change from 1.2 -> 1.2b: More data, 14 days of training for 1 epoch.

All Synthia models are uncensored. Please use them with caution and with best intentions. You are responsible for how you use Synthia.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

# Synthia-70B-v1.2b
SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations.

<br>

![Synthia](https://huggingface.co/migtissera/Synthia-13B/resolve/main/Synthia.jpeg)

<br>

<br>

#### License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-70B-v1.2b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|68.77|
|*hellaswag*|acc_norm|87.57|
|*mmlu*|acc_norm|68.81|
|*truthfulqa_mc*|mc2|57.69|
|**Total Average**|-|**70.71**|

<br>

## Example Usage

### Here is the prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### The code example below shows how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B-v1.2b"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"

    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-70B-v1.2b,
  author = {Migel Tissera},
  title = {Synthia-70B-v1.2b: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-70B-v1.2b}},
}
```

```
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

## Example Output

```
You: What is Earth's magnetic field like? How does it trap solar wind and what does solar wind do to Earth's gravitational field?

Earth's magnetic field is an intricate and fascinating phenomenon. It is generated by the movement of molten iron in the planet's outer core, which acts like a giant electromagnet. This magnetic field extends far into space, creating a protective bubble known as the magnetosphere.

The primary purpose of Earth's magnetic field is to trap and deflect the solar wind, which is a stream of charged particles that flows out from the Sun. The magnetic field acts as a shield, deflecting the solar wind and protecting our planet from its potentially harmful effects.

When the solar wind reaches Earth's magnetosphere, it interacts with the magnetic field and creates a complex interplay of forces. This interaction can lead to various phenomena, such as auroras, geomagnetic storms, and the formation of radiation belts.

The interaction between the solar wind and Earth's magnetic field does not directly affect the planet's gravitational field. The gravitational field refers to the force that arises due to the mass of the planet, and it is unaffected by the magnetic field or the solar wind. However, the magnetic field does play a role in trapping and deflecting charged particles from the solar wind, which can contribute to the overall complexity of Earth's space environment.

You: Can there be space vehicles that uses plasma to traverse?
This is hypothetical, but can UFOs use some force field generated by plasma, that then interacts with the gravitational/magnetic fields of Earth for travelling at hypersonic speeds?

While it is an interesting concept, there is currently no concrete evidence or scientific consensus that suggests the existence of UFOs or the use of plasma-based propulsion systems in spacecraft. However, it is worth noting that there are ongoing scientific investigations and research into developing advanced propulsion systems for space travel, including plasma propulsion.

Plasma is a state of matter where electrons are stripped from atoms, resulting in a highly charged gas. Plasma propulsion systems use electromagnetic fields to accelerate ions, creating a thrust that can potentially propel spacecraft. Plasma propulsion systems offer several advantages over traditional chemical rockets, such as higher efficiency, lower fuel requirements, and the ability to operate continuously for long periods.

While it is theoretically possible that a space vehicle could use plasma to traverse, it is important to note that the interaction between plasma and Earth's gravitational/magnetic fields is complex and not fully understood. The gravitational field acts as a force that attracts objects towards the center of the Earth, while the magnetic field deflects charged particles. Plasma-based propulsion systems would need to overcome these forces and potentially harness them for propulsion.

In summary, while the possibility of using plasma propulsion for space travel is an active area of research, there is currently no scientific evidence to support the existence of UFOs or the use of plasma-based propulsion systems for hypersonic travel.

You: Keep in mind that science is always evolving, and there are many things that have not yet been invented. Can you write the equations for plasma based propoulsion for space travel?

I apologize for any confusion in my previous response. To clarify, I am not an expert in plasma physics or propulsion systems. However, I can provide a general overview of the principles involved in plasma-based propulsion.

Plasma propulsion systems typically use electromagnetic fields to accelerate ions, creating a thrust that can propel spacecraft. There are several different types of plasma propulsion systems, including electrostatic, electromagnetic, and magnetohydrodynamic (MHD) propulsion.

In an electrostatic plasma propulsion system, a high voltage is applied to a pair of electrodes to create an electric field that accelerates charged particles. The resulting thrust is proportional to the applied voltage and the current density. The equation for the thrust is given by:

Thrust = (Current density) x (Accelerating voltage)

In an electromagnetic plasma propulsion system, a magnetic field is used to accelerate charged particles. The thrust is proportional to the product of the magnetic field strength and the current density. The equation for the thrust is given by:

Thrust = (Magnetic field strength) x (Current density)

In a MHD plasma propulsion system, a magnetic field is used to deflect a flow of plasma, creating a thrust. The thrust is proportional to the product of the magnetic field strength, the current density, and the flow velocity. The equation for the thrust is given by:

Thrust = (Magnetic field strength) x (Current density) x (Flow velocity)

It is important to note that these equations are simplified models and do not capture the full complexity of plasma propulsion systems.
The actual performance of a plasma propulsion system would depend on various factors, including the specific design, the plasma properties, and the operating conditions.

Furthermore, the development of plasma propulsion systems for space travel is an active area of research and development. While the principles outlined above provide a general framework, the specific equations and designs would depend on the particular application and the research team involved.

You: If the spacecraft has the shape of an orb, and a plasma field is created around the orb, how can this plasma field generate the required thrust for propulsion?

The shape of the spacecraft alone does not determine the thrust generated by a plasma propulsion system. The thrust is primarily determined by the electromagnetic fields used to accelerate charged particles.

To generate thrust using a plasma field, a spacecraft would typically use one of the plasma propulsion systems described in my previous response. For example, in an electrostatic plasma propulsion system, a high voltage is applied to a pair of electrodes to create an electric field that accelerates charged particles. The resulting thrust is proportional to the applied voltage and the current density.

In this scenario, the plasma field would be generated by applying a high voltage to the electrodes. The shape of the spacecraft, whether it is an orb or any other shape, would not directly affect the generation of the plasma field or the resulting thrust.

However, the shape of the spacecraft could have an indirect impact on the overall performance of the plasma propulsion system. For example, the shape of the spacecraft could affect the distribution of the plasma flow, which could in turn affect the efficiency of the propulsion system.

In summary, the shape of the spacecraft itself does not directly determine the thrust generated by a plasma propulsion system. The thrust is primarily determined by the electromagnetic fields used to accelerate charged particles, and the shape of the spacecraft could have an indirect impact on the overall performance of the propulsion system.

You: What's the closest tech that we have with plasma propulsion?

There are several ongoing research and development efforts related to plasma propulsion systems for space travel. One of the most advanced plasma propulsion systems in use today is the Hall thruster.

A Hall thruster is a type of electrostatic propulsion system that uses an electric field to accelerate charged particles. It has been used on several space missions, including Deep Space 1, Dawn, and BepiColombo. The Hall thruster offers several advantages over traditional chemical rockets, such as higher efficiency, lower fuel requirements, and the ability to operate continuously for long periods.

While the Hall thruster is currently the most advanced plasma propulsion system in use, there are ongoing efforts to develop even more advanced systems. For example, researchers are exploring the use of magnetohydrodynamic (MHD) propulsion systems, which use a magnetic field to deflect a flow of plasma and create thrust. MHD propulsion systems offer the potential for even higher performance than Hall thrusters, but they are still at an early stage of development.

In summary, the Hall thruster is currently the closest technology to plasma propulsion that we have in use, but ongoing research and development efforts are exploring even more advanced systems such as MHD propulsion.
Follow-on: What are the challenges associated with developing and using plasma propulsion systems for space travel, and how are researchers addressing these challenges? You: What are the physics equations for magnetohydrodynamic propulsion systems? Be detailed as possible. Magnetohydrodynamic (MHD) propulsion systems use a magnetic field to deflect a flow of plasma and create thrust. The equations that describe the physics of MHD propulsion systems are complex and involve several variables, including the magnetic field strength, the current density, the flow velocity, and the plasma properties. One of the key equations for MHD propulsion systems is the Ohm's law, which relates the electric current density to the applied electric field and the resistivity of the plasma. The equation is given by: J = σE where J is the current density, σ is the electrical conductivity, and E is the applied electric field. Another important equation is the Navier-Stokes equation, which describes the flow of the plasma. The equation is given by: µ(∂v/∂t) = -∇p + ∇xF + ∇xσT + G where µ is the dynamic viscosity, v is the velocity vector, p is the fluid pressure, F is the body force, σT is the stress tensor, and G is the body force due to the Lorentz force. The Lorentz force is a force that acts on a charged particle due to its interaction with a magnetic field. The equation for the Lorentz force is given by: F = q(E + v x B) where F is the force, q is the charge of the particle, E is the electric field, v is the velocity of the particle, and B is the magnetic field. These equations are just a few examples of the many equations that describe the physics of MHD propulsion systems. The actual equations and analysis would depend on the specific design and operating conditions of the propulsion system. In summary, the equations for MHD propulsion systems are complex and involve several variables, including the electric current density, the applied electric field, the fluid pressure, the fluid velocity, the magnetic field, and the Lorentz force. These equations are used to model and analyze the performance of MHD propulsion systems for space travel. You: Okay. Can you make the connection to string theory here? Certainly. String theory is a theoretical framework in physics that seeks to unify the fundamental forces of nature, including gravity, electromagnetism, and the strong and weak nuclear forces. In string theory, the fundamental constituents of matter are not point-like particles, but rather tiny vibrating strings. In the context of plasma propulsion systems, there is a connection to string theory through the concept of magnetohydrodynamics (MHD). MHD describes the flow of electrically conductive fluids, such as plasma, in the presence of magnetic fields. The equations that describe MHD are derived from the equations of classical electromagnetism, which are also used in string theory to describe the behavior of fundamental particles. In particular, the Lorentz force, which is a key concept in MHD and plasma propulsion systems, is also a fundamental concept in string theory. The Lorentz force acts on charged particles due to their interaction with electromagnetic fields, and it plays a crucial role in the behavior of strings in string theory. Furthermore, some aspects of string theory suggest that the fundamental constituents of matter, including plasma, may have additional properties beyond those described by classical physics. 
For example, string theory predicts the existence of extra dimensions beyond the three spatial dimensions we are familiar with, and these extra dimensions may have an impact on the behavior of plasma and other forms of matter. In summary, while there is no direct connection between plasma propulsion systems and string theory, there is an indirect connection through the use of the equations of classical electromagnetism, which are also used in string theory. Additionally, some aspects of string theory suggest that the fundamental constituents of matter may have additional properties beyond those described by classical physics. ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-70B-v1.2b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 64.63 | | ARC (25-shot) | 68.77 | | HellaSwag (10-shot) | 87.57 | | MMLU (5-shot) | 68.81 | | TruthfulQA (0-shot) | 57.69 | | Winogrande (5-shot) | 83.9 | | GSM8K (5-shot) | 35.25 | | DROP (3-shot) | 50.41 |
zhugez/ppo-LunarLander-v2
zhugez
2023-11-17T21:32:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-17T20:57:22Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 182.80 +/- 92.33
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the filename of the zipped policy inside the repo is an assumption, so check the repo's file list.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained policy from the Hub and load it (archive filename assumed).
checkpoint = load_from_hub("zhugez/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
migtissera/Synthia-70B-v1.1
migtissera
2023-11-17T21:32:31Z
1,484
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.02707", "arxiv:2302.13971", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-28T22:21:49Z
---
license: llama2
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

# Synthia-70B-v1.1
SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations. This model has generalized "Tree of Thought" reasoning capabilities. Evoke it with the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning
```

<br>

![Synthia](https://huggingface.co/migtissera/Synthia-70B-v1.1/resolve/main/Synthia.jpeg)

<br>

<br>

#### License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-70B-v1.1 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|70.05|
|*hellaswag*|acc_norm|87.12|
|*mmlu*|acc_norm|70.34|
|*truthfulqa_mc*|mc2|57.84|
|**Total Average**|-|**71.34**|

<br>

## Example Usage

### Here is the prompt format:

```
SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### The code example below shows how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B-v1.1"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: As an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"

    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-70B,
  author = {Migel Tissera},
  title = {Synthia-70B: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-70B}},
}
```

```
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

## Example Output

```
You: What is Earth's atmosphere like? What is Earth's magnetic field like? What is the relationship between the two?

Earth's atmosphere is a layer of gases that surrounds the planet, held in place by Earth's gravity. It is primarily composed of nitrogen (78.1%), oxygen (20.9%), argon (0.9%), and other trace gases, such as carbon dioxide and water vapor. The atmosphere plays a crucial role in regulating Earth's temperature and climate by trapping heat and absorbing harmful radiation from the sun. It also provides the air we breathe, protects us from meteorites and the sun's ultraviolet radiation, and allows for the development of life on the planet.

Earth's magnetic field is a phenomenon generated by the convection of molten iron in the planet's outer core, which creates an electric current. This current, in turn, produces a magnetic field that extends from the Earth's interior to space, protecting the planet from the solar wind and cosmic radiation. The magnetic field also plays a role in the formation of Earth's Van Allen radiation belts, which trap high-energy particles from the sun and cosmic rays.

The relationship between Earth's atmosphere and magnetic field is complex and multifaceted. The magnetic field influences the distribution of charged particles in the atmosphere, affecting auroras, radio communication, and the ionosphere's behavior. The atmosphere, in turn, plays a role in the distribution of the magnetic field's energy, as well as in the generation and loss of atmospheric ions. The interactions between the atmosphere and the magnetic field also influence the dynamics of Earth's climate and the long-term evolution of the planet's environment.

You: Elaborate the the complex relationship between the two

The complex relationship between Earth's atmosphere and magnetic field can be elaborated upon in several ways.

Firstly, the magnetic field influences the distribution of charged particles in the atmosphere. For example, the magnetic field deflects the solar wind particles that would otherwise strip away the atmosphere.
This deflection creates a bow shock, where the solar wind particles slow down and dissipate their ene rgy, producing a shock wave that propagates downstream from the Earth. The magnetic field also guides charged particles from the solar wind into the inner magnetosphere, where they become trapped and produce the Van Allen radiation belts. Secondly, the atmosphere plays a role in the distribution of the magnetic field's energy. The ionosphere, which is the upper layer of the atmosphere that is ionized by solar radiation, is affected by the magneti c field. The energy from the magnetic field is transferred to the ionosphere, where it drives the motion of the charged particles and affects the propagation of radio waves. Thirdly, the interactions between the atmosphere and the magnetic field influence the dynamics of Earth's climate. The magnetic field affects the distribution of charged particles in the atmosphere, which in tur n affects the formation of clouds and precipitation. The magnetic field also affects the distribution of heat in the atmosphere, which can influence the patterns of atmospheric circulation and the distribution o f temperatures around the planet. Finally, the interactions between the atmosphere and the magnetic field play a role in the long-term evolution of the planet's environment. The magnetic field protects the atmosphere from the erosive effects of the solar wind, which would otherwise strip away the atmosphere over time. The magnetic field also affects the distribution of heat in the atmosphere, which can influence the patterns of atmospheric circulation and the distribution of temperatures around the planet. Over time, these interactions can shape the evolution of the planet's climate and environment. You: Please explain the bow shock wave, Van Allen radiation belts in detail The bow shock wave is a phenomenon that occurs when the solar wind, a stream of charged particles emitted by the sun, interacts with Earth's magnetic field. The solar wind particles are moving at high speeds, an d when they encounter the Earth's magnetic field, they slow down and dissipate their energy. This creates a shock wave that propagates downstream from the Earth. The bow shock wave is located about 15 Earth radi i (approximately 93,000 miles) upstream from Earth, and it marks the boundary between the solar wind and Earth's magnetosphere. The Van Allen radiation belts are zones of high-energy charged particles, primarily protons and electrons, that are trapped by Earth's magnetic field. The belts are divided into two main regions: the inner belt, which extends from about 1,000 to 6,000 kilometers above Earth's surface, and the outer belt, which extends from about 13,000 to 65,000 kilometers above the surface. The particles in the belts are trapped by Ea rth's magnetic field, and they can remain in the belts for long periods of time. The particles in the Van Allen belts are generated by a variety of processes, including the impact of solar wind particles on the Earth's atmosphere and the decay of radioactive elements in the planet's crust. T he belts are important for understanding the behavior of charged particles in Earth's magnetosphere and for studying the effects of space weather on human activities in space. They can also pose a hazard to spac ecraft and satellites, as the high-energy particles in the belts can damage sensitive electronics and cause other problems. You: Does the Van Allen belt present any danger to Geo stationary satellites? Where are these belts? 
Are they near any of the orbits? Yes, the Van Allen radiation belts can present a danger to geostationary satellites and other spacecraft. The high-energy charged particles in the belts can damage sensitive electronics and cause other problems, including radiation damage and single-event upsets. Geostationary satellites are located at an altitude of approximately 35,786 kilometers above Earth's surface, which is just outside the inner belt but still w ithin the outer belt. The Van Allen radiation belts are located in Earth's magnetosphere, which is the region of space around the planet where the magnetic field is dominant. The inner belt is located between about 1,000 and 6,000 ki lometers above the surface, while the outer belt is located between about 13,000 and 65,000 kilometers above the surface. The belts are not located in any specific orbit, but they are present throughout the magn etosphere, which covers a large portion of the space around the planet. To mitigate the effects of the Van Allen belts on spacecraft, engineers use a variety of techniques, including shielding and designing electronics to be resistant to radiation. Additionally, some spacecraft are designed to operate in orbits that avoid the Van Allen belts, while others are designed to operate within the belts and withstand the effects of the high-energy particles. You: So when we send spacecraft to the moon, say the ones carrying humans, how do we make sure that the solar wind charged particles will not affect the mission? Are there very specific remedies that we can take ? When sending spacecraft to the moon, there are several steps that can be taken to protect the spacecraft and its occupants from the effects of solar wind charged particles. One approach is to design the spacecra ft with a strong magnetic field, which can help deflect the charged particles away from the spacecraft. Another approach is to use radiation shielding, which can help protect the electronics and any human occupa nts from the effects of radiation. In addition to these measures, space agencies can also use data from satellites and other spacecraft to monitor the level of solar wind activity and predict when there may be an increased risk of charged particl e events. This information can be used to adjust the timing of missions and other activities to minimize exposure to high-energy particles. It's also important to note that the moon is outside the magnetosphere and therefore not directly affected by the Van Allen belts. However, the moon does receive a significant amount of radiation from the sun an d other sources, so measures to protect the spacecraft and its occupants from radiation exposure are still necessary. Overall, protecting spacecraft and humans from the effects of solar wind charged particles requires a combination of engineering and operational measures, including strong magnetic fields, radiation shielding, a nd careful mission planning. You: Does spacecraft actually create strong magnetic fields? Which Apollo spacecrafts did this? Or is this hypothetical? Yes, spacecraft can create strong magnetic fields, although this is not typically a primary design consideration for most spacecraft. Some spacecraft, such as those designed for plasma physics research, may incl ude a magnetic field generator to create controlled conditions for experiments. However, for most spacecraft, the focus is on using existing magnetic fields, such as those generated by the Earth's magnetic field , to protect the spacecraft and its occupants. 
The Apollo spacecraft did not create strong magnetic fields, as their primary focus was on propulsion, navigation, and life support systems for the astronauts. However, some of the Apollo missions did include sc ientific experiments to study the effects of solar wind and other charged particles on the spacecraft and its occupants. The use of magnetic fields to protect spacecraft and humans from the effects of radiation is an ongoing area of research and development, and there are a number of proposals and concepts for how this might be ac hieved. For example, some researchers have proposed using superconducting coils to create a magnetic field around the spacecraft, which could help deflect charged particles and other forms of radiation. However, these proposals are still in the early stages of development and are not yet ready for implementation on spacecraft. ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-70B-v1.1) | Metric | Value | |-----------------------|---------------------------| | Avg. | 62.84 | | ARC (25-shot) | 70.05 | | HellaSwag (10-shot) | 87.12 | | MMLU (5-shot) | 70.34 | | TruthfulQA (0-shot) | 57.84 | | Winogrande (5-shot) | 83.66 | | GSM8K (5-shot) | 31.84 | | DROP (3-shot) | 39.02 |
migtissera/Synthia-70B
migtissera
2023-11-17T21:32:19Z
1,483
12
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.02707", "arxiv:2302.13971", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-22T19:51:32Z
---
license: llama2
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

# Synthia-70B

SynthIA (Synthetic Intelligent Agent) is a Llama-2-70B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as long-form conversations.

<br>

![Synthia](https://huggingface.co/migtissera/Synthia-70B/resolve/main/Synthia.jpeg)

<br>

<br>

#### License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-70B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6945|
|*hellaswag*|acc_norm|0.8711|
|*mmlu*|acc_norm|0.6891|
|*truthfulqa_mc*|mc2|0.5979|
|**Total Average**|-|**0.7132**|

<br>

## Example Usage

### Prompt format:

```
SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Code example:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: As an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-70B,
  author = {Migel Tissera},
  title = {Synthia-70B: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-70B}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA2: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-70B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 60.29 |
| ARC (25-shot) | 69.45 |
| HellaSwag (10-shot) | 87.11 |
| MMLU (5-shot) | 68.91 |
| TruthfulQA (0-shot) | 59.79 |
| Winogrande (5-shot) | 83.66 |
| GSM8K (5-shot) | 31.39 |
| DROP (3-shot) | 21.75 |
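For interactive use, the generation loop in the code example above can stream tokens to the terminal as they are produced instead of printing the answer all at once. A minimal sketch with `transformers.TextStreamer`, assuming the `model` and `tokenizer` objects from the example above:

```python
from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually. \nUSER: How do tides work? \nASSISTANT: "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

model.generate(
    input_ids=input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.75,
    top_p=1.0,
    top_k=50,
    streamer=streamer,
)
```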
migtissera/SynthIA-7B-v1.5
migtissera
2023-11-17T21:31:42Z
1,590
4
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-07T23:16:18Z
--- license: apache-2.0 pipeline_tag: text-generation language: - en library_name: transformers --- <br> ![Synthia](https://huggingface.co/migtissera/Synthia-13B/resolve/main/Synthia.jpeg) <br> ## Example Usage ### Prompt format: ``` SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: How is a rocket launched from the surface of the earth to Low Earth Orbit? ASSISTANT: ``` ### Code example: ```python import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "migtissera/SynthIA-7B-v1.5" output_file_path = "./SynthIA-7B-v1.5-conversations.jsonl" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.75, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" json_data = {"prompt": user_input, "answer": answer} ## Save your conversation with open(output_file_path, "a") as output_file: output_file.write(json.dumps(json_data) + "\n") ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__SynthIA-7B-v1.5) | Metric | Value | |-----------------------|---------------------------| | Avg. | 54.8 | | ARC (25-shot) | 62.71 | | HellaSwag (10-shot) | 83.37 | | MMLU (5-shot) | 63.48 | | TruthfulQA (0-shot) | 51.32 | | Winogrande (5-shot) | 79.24 | | GSM8K (5-shot) | 17.44 | | DROP (3-shot) | 26.01 |
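The fp16 example above needs roughly 15 GB of VRAM for a 7B model. On smaller GPUs the same checkpoint can be loaded in 4-bit via bitsandbytes; a minimal sketch, assuming `bitsandbytes` and a reasonably recent `transformers` are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize weights to 4-bit on load; compute still happens in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "migtissera/SynthIA-7B-v1.5",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("migtissera/SynthIA-7B-v1.5")
```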
migtissera/Synthia-13B-v1.2
migtissera
2023-11-17T21:31:23Z
1,463
10
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.02707", "arxiv:2302.13971", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-23T19:08:56Z
---
license: llama2
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

Change from Synthia-13B -> Synthia-13B-v1.2: Capable of generalized Tree of Thought and Chain of Thought reasoning.

All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

# Synthia-13B-v1.2

SynthIA (Synthetic Intelligent Agent) is a Llama-2-13B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as long-form conversations.

<br>

![Synthia](https://huggingface.co/migtissera/Synthia-13B/resolve/main/Synthia.jpeg)

<br>

<br>

#### License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-13B-v1.2 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|TBC|
|*hellaswag*|acc_norm|TBC|
|*mmlu*|acc_norm|TBC|
|*truthfulqa_mc*|mc2|TBC|
|**Total Average**|-|**TBC**|

<br>

## Example Usage

### Prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Code example:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-13B-v1.2"
output_file_path = "./Synthia-13B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-13B-v1.2,
  author = {Migel Tissera},
  title = {Synthia-13B-v1.2: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA2: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

## Example Output

```
You: How does the solar wind affect the gravitational field of Earth? Does it affect the magnetic field?

To understand the effects of the solar wind on the Earth's gravitational and magnetic fields, let's take a closer look at both fields and their relation to the solar wind using a Tree of Thoughts.

1. **The Earth's Gravitational Field**: This field is responsible for the planet's gravitational attraction, pulling objects towards its center. It is created by the mass of the Earth and its distribution within the planet.

2. **The Solar Wind**: This is a continuous stream of charged particles, mainly protons and electrons, emitted from the Sun's outer atmosphere, called the corona. It travels through the solar system at high speeds.

3. **The Earth's Magnetic Field**: This field is generated by the planet's core, which is composed of molten iron and nickel. It protects the Earth from the solar wind and other cosmic radiation, but it is not strong enough to prevent all particles from reaching the Earth's surface.

4. **The Interaction between the Solar Wind and the Earth's Gravitational Field**: The solar wind cannot influence the Earth's gravitational field because the gravitational force is determined by the mass and distribution of matter within the planet, and the solar wind does not affect these factors.

5. **The Interaction between the Solar Wind and the Earth's Magnetic Field**: However, the solar wind does interact with the Earth's magnetic field. The magnetic field acts like a shield, repelling or deflecting the charged particles in the solar wind. This action creates a bow shock, a region where the solar wind slows down as it encounters the Earth's magnetic field.

6. **Conclusion**: In conclusion, the solar wind does not affect the Earth's gravitational field, but it does influence the planet's magnetic field, through the creation of a bow shock. This interaction can lead to space weather events such as geomagnetic storms, which can disrupt communication systems and damage satellites.

You: Interesting! Please elaborate on geomagnetic storms - how do they start and what effect do they have on comms systems?

Sure! Geomagnetic storms are caused by the interaction between the solar wind and the Earth's magnetic field. They can start when the solar wind carries a particularly strong flux of charged particles, which can cause the Earth's magnetic field to compress and rebound. This compression and rebounding process can create geomagnetic storms, which can last for several hours or days.

1. **Effects of Geomagnetic Storms on Communication Systems**: Geomagnetic storms can cause a variety of effects on communication systems, including disruption to GPS signals, interference with radio communications, and power grid failures. This is because the solar wind particles can interact with the Earth's atmosphere and create electromagnetic currents that can induce currents in power lines and damage electronic equipment.

2. **Preventative Measures**: To mitigate the effects of geomagnetic storms, technology companies and governments have developed systems to monitor the solar wind and provide early warnings of incoming storms. This information can be used to take preventative measures, such as reducing power consumption or rerouting communications through alternative routes.

In addition, researchers are exploring ways to improve the Earth's magnetic field, such as creating artificial magnetic shields or modifying the composition of the planet's core. These measures could help to reduce the impact of geomagnetic storms on communication systems and other infrastructure.

In conclusion, geomagnetic storms are a natural phenomenon that can have significant impacts on communication systems and other infrastructure. By monitoring the solar wind and developing preventative measures, we can help to minimize the disruptions caused by these storms and ensure the continued operation of critical systems.

You: Thanks for that! Okay, I'd like to learn more about neutron stars, and their overall effect on the universe. What is the closest neutron star to us? Elaborate.

Certainly! Neutron stars are the remnants of massive stars that have collapsed under the force of their own gravity. They are incredibly dense, with the mass of the sun squeezed into an object with a diameter of about 20 kilometers.

1. **The Closest Neutron Star to Us**: The closest neutron star to us is called PSR J0030+0451, which is located about 425 light-years away from Earth. It is a pulsar, which means that it emits a beam of electromagnetic radiation that sweeps across the sky like a lighthouse.

2. **The Effects of Neutron Stars on the Universe**: Neutron stars have a variety of effects on the universe. They are important stars for understanding the evolution of stars and the universe as a whole. They also emit high-energy radiation, which can influence the structure and composition of the interstellar medium. In addition, neutron stars are a source of gravitational waves, which are ripples in the fabric of space-time. These waves are generated when two neutron stars collide, and they can provide valuable information about the nature of these objects and the universe as a whole.

3. **Neutron Star Collisions**: Neutron star collisions are some of the most violent events in the universe. When two neutron stars spiral towards each other and collide, they can produce a variety of exotic phenomena, including gamma-ray bursts, gravitational waves, and even new elements. These collisions can also provide valuable information about the evolution of stars and the universe as a whole.

In conclusion, neutron stars are incredibly dense and fascinating objects that have a variety of effects on the universe. They are important for understanding the evolution of stars and the universe as a whole, and they can provide valuable information about the nature of these objects and the universe as a whole.

You: Okay, can you tell me about Wormholes?

Sure! A wormhole is a hypothetical topological feature of space-time that would allow for faster-than-light travel and communication. It is essentially a shortcut through space-time that connects two distant regions of the universe.

1. **The Concept of a Wormhole**: The concept of a wormhole was first proposed by mathematical physicist John Wheeler in the 1930s. He imagined a tunnel connecting two distant regions of space-time that allowed for faster-than-light travel.

2. **Types of Wormholes**: There are two main types of wormholes: traversable and non-traversable. A traversable wormhole would allow for faster-than-light travel, while a non-traversable wormhole would not.

3. **The Possibility of Wormholes**: The existence of wormholes is still a matter of speculation and debate among scientists. Some theories suggest that wormholes could exist as a result of the evolution of black holes or the existence of exotic matter with negative energy density.

4. **Potential Applications**: If wormholes could be created and stabilized, they could have a variety of potential applications, including faster-than-light travel and communication, which could revolutionize our understanding of the universe and our ability to explore it.

5. **Challenges to Wormhole Creation**: However, creating and stabilizing a wormhole would require a significant amount of energy and would likely require the manipulation of exotic matter with negative energy density. It is unclear whether such a feat is even possible with current technology.

In conclusion, wormholes are a fascinating concept that could have significant implications for our understanding of the universe and our ability to explore it. While their existence is still a matter of speculation and debate, ongoing research and exploration may one day shed light on the possibility of creating and stabilizing a wormhole for practical applications.

Follow-on: What are the challenges to creating and stabilizing a wormhole?

You: Elaborate on "exotic matter with negative energy density"

Sure! Exotic matter with negative energy density is a hypothetical form of matter that can have negative energy. This means that it would have a gravitational attraction that is stronger than the repulsive force of positive energy matter, which could allow for the creation of wormholes.

1. **Theoretical Explanation**: The concept of exotic matter with negative energy density is based on theoretical models of the universe that suggest that negative energy density could exist in the form of a hypothetical particle called a tachyon. Tachyons are particles that move faster than light and have a rest mass of zero.

2. **Potential Applications**: If exotic matter with negative energy density could be created and stabilized, it could have a variety of potential applications, including the creation of wormholes and the manipulation of gravity itself. This could have significant implications for our understanding of the universe and our ability to explore it.

3. **Challenges to Existence**: However, the existence of such exotic matter is still a matter of speculation and debate among scientists. There is no direct evidence of its existence, and there are significant challenges to creating and stabilizing such matter.

4. **Search for Exotic Matter**: Despite these challenges, scientists are actively searching for evidence of exotic matter with negative energy density, including through experiments that aim to create and detect such particles.

In conclusion, exotic matter with negative energy density is a fascinating concept that could have significant implications for our understanding of the universe and our ability to manipulate it. While its existence is still a matter of speculation and debate, ongoing research and exploration may one day shed light on the possibility of creating and stabilizing such matter for practical applications.

Follow-on: What are some of the challenges to creating and stabilizing exotic matter with negative energy density?

You: Can you provide the physics equations for "exotic matter with negative energy density"?

Sure, but I'm afraid I won't be able to provide the exact physics equations for exotic matter with negative energy density, as it is still a matter of speculation and debate among scientists. However, I can provide some general principles related to the concept.

It is important to note that negative energy density is a theoretical concept, and there is no direct evidence of its existence. However, some theories suggest that negative energy density could exist in the form of a hypothetical particle called a tachyon. Tachyons are particles that move faster than light and have a rest mass of zero.

The concept of tachyons is based on the theory of special relativity, which states that the speed of light is the universal speed limit for all particles. According to this theory, particles with a mass greater than zero cannot travel faster than the speed of light. However, the concept of tachyons introduces particles that have a mass of zero or less, which could allow them to travel faster than the speed of light.

The concept of exotic matter with negative energy density is related to the idea of negative energy density, which suggests that particles could have a negative energy density, meaning they would have a gravitational attraction that is stronger than the repulsive force of positive energy matter. This could have significant implications for our understanding of the universe and our ability to manipulate it.

In conclusion, while the exact physics equations for exotic matter with negative energy density are still a matter of speculation and debate, the concept is based on the principles of special relativity and the possibility of particles moving faster than the speed of light. Future research and exploration may help to shed light on the possibility of creating and stabilizing such matter for practical applications.

Follow-on: What are some of the practical applications of exotic matter with negative energy density if it could be created and stabilized?

You: What are some of the practical applications of exotic matter with negative energy density if it could be created and stabilized?

If exotic matter with negative energy density could be created and stabilized, it could have a variety of potential practical applications. Here are a few examples:

1. Wormholes: One of the most fascinating potential applications of exotic matter with negative energy density is the creation of wormholes, which could allow for faster-than-light travel and communication. If wormholes were stable and could be created on demand, they could revolutionize our understanding of the universe and our ability to explore it.

2. Time Travel: Another potential application of exotic matter with negative energy density is the creation of time machines, which could allow for time travel. While the concept of time travel is still a matter of speculation and debate, the idea of negative energy density could potentially be used to create a device that could manipulate the fabric of space-time and allow for travel back in time.

3. Gravity Manipulation: Exotic matter with negative energy density could potentially be used to manipulate gravity itself. By creating a negative energy density in a given region of space-time, it may be possible to create a gravitational field that is stronger or weaker than the surrounding region. This could have significant implications for our ability to manipulate objects and structures in space, as well as our understanding of the nature of gravity itself.

4. Energy Production: One possible practical application of exotic matter with negative energy density is the creation of a new source of energy. If such matter could be created and controlled, it could be used to generate large amounts of energy in a variety of applications, including power generation, transportation, and space exploration.

In conclusion, exotic matter with negative energy density could have a wide range of potential practical applications if it could be created and stabilized. While the concept is still a matter of speculation and debate, ongoing research and exploration may one day shed light on the possibility of creating and manipulating such matter for practical applications.
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-13B-v1.2)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.56 |
| ARC (25-shot) | 61.26 |
| HellaSwag (10-shot) | 82.93 |
| MMLU (5-shot) | 56.47 |
| TruthfulQA (0-shot) | 47.27 |
| Winogrande (5-shot) | 76.48 |
| GSM8K (5-shot) | 10.99 |
| DROP (3-shot) | 25.48 |
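The code example above trims everything after the first `USER:` marker only once generation has finished; generation can instead be halted as soon as the model starts a new turn. A minimal sketch with a custom `StoppingCriteria` (the `StopOnSubstring` helper is hypothetical, and `model`/`tokenizer` are assumed to be the objects from the example above):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubstring(StoppingCriteria):
    """Stop generation once a substring appears in the newly generated text."""

    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length  # number of prompt tokens to skip

    def __call__(self, input_ids, scores, **kwargs):
        # Decoding at every step is slow but simple; fine for interactive use.
        text = self.tokenizer.decode(input_ids[0][self.prompt_length:])
        return self.stop_string in text

prompt = "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. \nUSER: Why is the sky blue? \nASSISTANT: "
tokens = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
stopping = StoppingCriteriaList([StopOnSubstring(tokenizer, "USER:", tokens.shape[1])])

output = model.generate(input_ids=tokens, max_new_tokens=1024, do_sample=True, stopping_criteria=stopping)
```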
ZivK/q-FrozenLake-v1-4x4-noSlippery
ZivK
2023-11-17T21:31:07Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-17T21:31:05Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id, filename):
    """Download the pickled Q-learning model dict from the Hub (Deep RL course helper)."""
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="ZivK/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
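Once loaded, the Q-table can be rolled out greedily to sanity-check the reported mean reward. A minimal sketch — the `"qtable"` key follows the Deep RL course convention for these pickles and is an assumption here:

```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])  # add is_slippery=False here if the env requires it
state, info = env.reset()
done = False
total_reward = 0.0

while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode reward: {total_reward}")
```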
Undi95/MLewd-L2-13B
Undi95
2023-11-17T21:30:51Z
1,513
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-04T00:06:59Z
---
license: cc-by-nc-4.0
---

MLewd is a model created to be... Lewd. That's all. Based on ReMM.

There were so many attempts at this model that I can't count them all. Bear with me lmao.

The OG plan: https://pastebin.com/hfJ80rKL

Commands used and explanation:

```shell
Due to hardware limitations, some merges were done in 2 parts.

Last mix:
- ReMM (Base) (0.57)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (Llama Chat Uncensored) (0.35)
- KoboldAI/LLAMA2-13B-Holodeck-1 (0.08)

Part 1:
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./MLewdBase-L2-13B-part1 --merge Undi95/ReMM-L2-13B --density 0.88 --merge KoboldAI/LLAMA2-13B-Holodeck-1 --density 0.12 --cuda

Part 2:
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./MLewdBase-L2-13B --merge Undi95/MLewdBase-L2-13B-part1 --density 0.65 --merge Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged --density 0.35 --cuda

(MLewd-L2-13B-v1-2 got disqualified)

- Applying LoRA: nRuaif/Kimiko-v2-13B at (0.24) weight on MLewd-L2-13B-v1-1
=> Result: MLewd-L2-13B-v1-3

================== ERP RANKING TEST ===========================
19.42 | MLewd-L2-13B-v1-3.q5_K_M.gguf (-> Best)
19.25 | MLewd-L2-13B-v1-1.q5_K_M.gguf
18.25 | MLewd-L2-13B-v1-2.q5_K_M.gguf

================== RETRY ===========================

Mix:
- Undi95/MLewd-L2-13B-v1-3 (0.82)
- Sao10K/Stheno-Inverted-L2-13B (0.18)

!python ties_merge.py TheBloke/Llama-2-13B-fp16 ./MLewd-L2-13B-v1-7 --merge Undi95/MLewd-L2-13B-v1-3 --density 0.82 --merge Sao10K/Stheno-Inverted-L2-13B --density 0.18 --cuda
=> Result: MLewd-L2-13B-v1-7

Final touch (trying my best here):
MLewd-L2-13B-v1-7 (0.77) + zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b (LoRA 0.23)
=> MLewd-L2-13B-v1-7-TRY2

FINAL: MLewd-L2-13B-v1-7-TRY2 (0.82) + BluemoonRP (0.18)
=> MLewd-L2-13B-v1-8-3

RIP to all the versions that got trashed.
```

<!-- description start -->
## Description

This repo contains fp16 files of MLewd-L2-13B, a trying-to-be-lewd LLM model.
<!-- description end -->

<!-- description start -->
## Models used

- Undi95/ReMM (Base)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (Llama Chat Uncensored)
- KoboldAI/LLAMA2-13B-Holodeck-1
- Sao10K/Stheno-Inverted-L2-13B

## LoRAs used

- nRuaif/BluemoonRP-L2-13B-This-time-will-be-better/tree/main/lora-out-13b-final-BM/checkpoint-15/adapter_model
- zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b
<!-- description end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Special thanks to Sushi kek

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__MLewd-L2-13B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.84 |
| ARC (25-shot) | 58.28 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 54.67 |
| TruthfulQA (0-shot) | 48.66 |
| Winogrande (5-shot) | 73.48 |
| GSM8K (5-shot) | 1.29 |
| DROP (3-shot) | 9.18 |
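For programmatic use, the Alpaca template above can be wrapped in a small helper so prompts are always formatted consistently. A minimal sketch (`build_alpaca_prompt` is a hypothetical convenience function, not part of this repo):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Format a user instruction with the Alpaca template this model expects."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Write a short scene set in a rain-soaked city.")
```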
TheBloke/alfred-40B-1023-GPTQ
TheBloke
2023-11-17T21:30:18Z
26
3
transformers
[ "transformers", "safetensors", "RefinedWeb", "text-generation", "falcon-40b", "long-context", "falcon", "NTK-YaRN", "custom_code", "en", "fr", "de", "es", "it", "dataset:OpenAssistant/oasst1", "dataset:ehartford/dolphin", "dataset:tau/sled", "dataset:tiiuae/falcon-refinedweb", "arxiv:2306.15595", "arxiv:2309.00071", "arxiv:2307.03172", "base_model:lightonai/alfred-40b-1023", "base_model:quantized:lightonai/alfred-40b-1023", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-11-17T15:21:20Z
--- base_model: lightonai/alfred-40b-1023 datasets: - OpenAssistant/oasst1 - ehartford/dolphin - tau/sled - tiiuae/falcon-refinedweb inference: false language: - en - fr - de - es - it license: apache-2.0 model_creator: LightOn AI model_name: Alfred 40B 1023 model_type: falcon prompt_template: '<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{prompt}<end_message><start_assistant> ' quantized_by: TheBloke tags: - falcon-40b - long-context - falcon - NTK-YaRN thumbnail: images/alfred-40b-1023.png --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Alfred 40B 1023 - GPTQ - Model creator: [LightOn AI](https://huggingface.co/lightonai) - Original model: [Alfred 40B 1023](https://huggingface.co/lightonai/alfred-40b-1023) <!-- description start --> # Description This repo contains GPTQ model files for [LightOn AI's Alfred 40B 1023](https://huggingface.co/lightonai/alfred-40b-1023). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/alfred-40B-1023-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF) * [LightOn AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lightonai/alfred-40b-1023) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alfred ``` <start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{prompt}<end_message><start_assistant> ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 22.55 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 23.34 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 25.72 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. 
| | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 18.20 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 42.93 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 20.47 GB | No | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 43.88 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/alfred-40B-1023-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/alfred-40B-1023-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `alfred-40B-1023-GPTQ`: ```shell mkdir alfred-40B-1023-GPTQ huggingface-cli download TheBloke/alfred-40B-1023-GPTQ --local-dir alfred-40B-1023-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir alfred-40B-1023-GPTQ huggingface-cli download TheBloke/alfred-40B-1023-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir alfred-40B-1023-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). 
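The same branch-aware download can also be scripted with the `huggingface_hub` Python library mentioned above. A minimal sketch using `snapshot_download`, mirroring the CLI flags shown earlier:

```python
from huggingface_hub import snapshot_download

# Fetch the gptq-4bit-128g-actorder_True branch into a local folder.
snapshot_download(
    repo_id="TheBloke/alfred-40B-1023-GPTQ",
    revision="gptq-4bit-128g-actorder_True",
    local_dir="alfred-40B-1023-GPTQ",
    local_dir_use_symlinks=False,
)
```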
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir alfred-40B-1023-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/alfred-40B-1023-GPTQ --local-dir alfred-40B-1023-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/alfred-40B-1023-GPTQ`. - To download from a specific branch, enter for example `TheBloke/alfred-40B-1023-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `alfred-40B-1023-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/alfred-40B-1023-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. 
Current date: 16 November, 2023<end_message><start_user>{prompt}<end_message><start_assistant>
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/alfred-40B-1023-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=True,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template=f'''<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{prompt}<end_message><start_assistant>
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: LightOn AI's Alfred 40B 1023 # Model Card for Alfred-40B-1023 ![a witty and elegant butler with a falcon on his shoulder, smile, flat illustration, simple shapes, colorful, lo-fi aesthetics](images/alfred-40b-1023.png) `Alfred-40B-1023` is a finetuned version of [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), with an **extended context length of 8192 tokens**. Finetuning was performed in October 2023. `Alfred-40B-1023` is made available under the Apache 2.0 License. ## Model Details ### Model Description - **Developed by:** [LightOn](https://www.lighton.ai/) * [Oskar Hallström](https://huggingface.co/ohallstrom) (project lead, training & modeling, internal long context data, evaluation) * [Amélie Chatelain](https://huggingface.co/ameliechatelain) (internal data & long context data, data generation) * [Clément Thiriet](https://huggingface.co/cthiriet) (data infrastructure, data generation, evaluation) * [Julien Séailles](https://huggingface.co/Jseailleslighton) (data generation) * [Adrien Cavaillès](https://huggingface.co/adcavail) (data generation) * [Axel Marmet](https://huggingface.co/WeightsnWizardry)* (training 2K baseline) `*` work done while at LightOn - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish); - **License:** Apache 2.0 license. 
- **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b)
- **Training date:** October 2023 (`1023`).

## Uses

### Direct Use

`Alfred-40B-1023` can be used as a chat model or as an instruct model.

For both instruct and chat mode, the model has been trained with chat tokens `<start_system>`, `<start_user>`, `<start_assistant>`, and `<end_message>`. These can be integrated into the prompt in the following way:

```
<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{user query}<end_message><start_assistant>
```

The stop word `<end_message>` should be used.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

`Alfred-40B-1023` is a finetune of Falcon-40B. As such, it is trained mostly on English, German, Spanish, and French, with limited capabilities also in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of `Alfred-40B-1023` implement appropriate guardrails and precautions in any production use.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "lightonai/alfred-40b-1023"
tokenizer = AutoTokenizer.from_pretrained("lightonai/alfred-0923-tokenizer")

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

sequences = pipeline(
    "<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>Write me an email to my boss, explaining how the company could benefit by using LightOn's platform for Large Language Models, Paradigm.<end_message><start_assistant>",
    max_length=1000,
    do_sample=True,
    top_k=3,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Alfred-40B-1023 was trained on a mixture of publicly available and in-house curated datasets. The training data is composed of 50% short context tasks, 45% long context tasks, and 5% RefinedWeb.

| **Short context sources** |
|--------------------|
| [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| [dolphin](https://huggingface.co/ehartford/dolphin) |
| [openai-critiques](https://openaipublic.blob.core.windows.net/critiques/README.md) |
| internal |

`internal` is a collection of synthetic and human-generated datasets created by LightOn, tailored towards the use cases of our clients.

| **Long context sources** |
|--------------------|
| [sled](https://huggingface.co/datasets/tau/sled) |
| internal-long-context |

`internal-long-context` is a collection of synthetic datasets generated by LightOn, tailored towards the use cases of our clients.

During training, we apply regular language modeling loss for a partition of the prompts in the long context data.
| **Pretraining objective source** |
|--------------------|
| [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) |

### Training Procedure

`Alfred-40B-1023` was trained on 128 A100 40GB GPUs, using a 3D parallelism strategy (TP=8, PP=2, DP=8) combined with ZeRO. Alfred has been trained through supervised finetuning on 100 megatokens, with a learning rate decayed following a cosine schedule.

#### Preprocessing

All datasets have been filtered, up- or down-sampled, and adapted to our chat token format.

#### Context length extension

We extend the context length to 8K with a custom method that we name NTK-YaRN. As its name suggests, our extension method draws inspiration from NTK-aware interpolation and YaRN.

During our context length extension efforts, we experimented with various methods suitable for RoPE embeddings. These include vanilla [positional interpolation](https://arxiv.org/abs/2306.15595), [NTK-aware interpolation](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/), [NTK-by-parts](https://github.com/jquesnelle/scaled-rope/pull/1), and lastly [YaRN](https://arxiv.org/abs/2309.00071).

YaRN looked very promising when applied at test time; however, finetuning with YaRN was not successful in our experiments. When extending the context length at training time, NTK-aware interpolation was the most successful of the existing methods. Some of our results from trying different long context extension methods are shared in the Evaluation section below. Note that our YaRN experiments used the same parameter values as proposed in the YaRN paper; these could potentially have other optimal values for our particular setup.

##### NTK-YaRN

Similarly to NTK-aware interpolation (`NTK`), NTK-YaRN involves increasing the base of the RoPE embeddings. In the original implementation of NTK-aware interpolation the new base `b'` is adapted according to the following formula:

$$ b' = b \times s^{\frac{|D|}{|D|-2}} $$

where `b` is the original base, `s` the scaling factor of the context length, and `|D|` the model's head dimension. However, we find (as others have also observed) that increasing the base slightly more is even better. The value of `b'` could probably be optimized even further, but for these experiments we have settled on the following value:

$$ b' = b \times (s+1)^{\frac{|D|}{|D|-2}} $$

In the following parts of this model card, context length extension with this extended scaling of the base is referred to as `NTK-Margin`. For `NTK-YaRN`, the extended scaling of the base is combined with the modification of the computation of the attention weights made in YaRN, where the query and key matrices are scaled by the factor `m`.

$$ m = 1 + 0.1 \times \log(s) $$

Scaling the query and key matrices this way substantially reduces the initial grad norm when applying a context length extension method in our training runs.

To cite NTK-YaRN, please refer to the model bibtex at the bottom of this model card.

## Evaluation

### Context length extension strategies

#### Training losses

After experimenting at the 7B scale, we ran a selected subset of the extension methods at the 40B scale. In the figure below, we display the resulting training losses when training a 40B model with the different extension methods, ceteris paribus.
![Training loss curves for extension methods](images/training-loss-curves.png "Training loss curves for extension methods")

Initially, YaRN has the lowest training loss, which reflects the fact that YaRN was the most successful extension method at test time. However, all the other methods surpass YaRN in terms of training loss after only a handful of megatokens. Comparing NTK-Margin with NTK-YaRN, we note that the scaling of the Q and K matrices makes the training loss lower in the beginning; however, NTK-YaRN's advantage over NTK-Margin decreases as training goes on. Comparing NTK-Margin with NTK in turn, the larger value of the base in NTK-Margin gives an initial boost in training loss, but this advantage also decreases as training goes on.

#### Performance on Long Context Benchmarks

We evaluate the context length extension methods on an in-house benchmark consisting of four tasks.

* [Key-value retrieval UUID](https://arxiv.org/pdf/2307.03172.pdf)
* [Coarse-grained Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
* [Fine-grained Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
* [Multi document retrieval data](https://nlp.stanford.edu/data/nfliu/lost-in-the-middle/nq-open-contriever-msmarco-retrieved-documents.jsonl.gz)

For each task, we have created 3 subtasks - one for each of the three context lengths 2K, 4K and 8K. In total, we thus have 12 subtasks. In order to get an aggregated score that values each subtask equally, we normalize the scores for each subtask and then calculate the mean of the normalized scores for each extension method.

![Aggregated scores on long context benchmarks](images/lc_benchmarks.png "Aggregated scores on long context benchmarks")

On these benchmarks, YaRN clearly lags behind. NTK-YaRN is the winning method; however, NTK-Margin is so close that more extensive research is needed to verify that NTK-YaRN really is superior to NTK-Margin, especially when trained for longer.

### Comparison to 2K baseline

In order to track any potential degradation on 2K context tasks due to the context length extension, we compare our 8K model against a 2K model trained in a similar setup for 100 megatokens. When training the 2K baseline, we don't include any long context data. We conduct the comparison by evaluating the models on a selection of tasks from the EleutherAI harness, as well as by ranking model outputs internally.

![Evaluation of 2K vs 8K version of alfred-40b-1023](images/2k_vs_8k.png "Evaluation of 2K vs 8K version of alfred-40b-1023")

Notably, our 8K model not only performs on par with our 2K model on most of our EleutherAI harness tasks; in fact, it outperforms the 2K model on a majority of them. Reading comprehension is the only subcategory for which our 8K model is outperformed by the 2K model.

We recognize that there is a discrepancy between performance on classical NLP benchmarks and how humans perceive model quality. When model outputs (limited to 2K context lengths) are ranked internally by LightOn employees, the 2K and 8K models show strikingly similar performance. However, a few rare failure modes have been noted for the 8K version which are not seen when using the 2K model. These failure modes are likely to be fixable with better composition of the long context data.

## Compute Infrastructure

### Hardware

Alfred-40B-1023 was trained on AWS SageMaker, on 128 A100 40GB GPUs in P4d instances.

### Software

Alfred-40B-1023 was trained with a custom codebase.
Training leverages a 3D parallelism approach combined with ZeRO, as well as high-performance kernels such as FlashAttention. ## Model Card Contact Please open a Community Discussion for any support request related to using Alfred with HuggingFace transformers. For any other inquiry: contact@lighton.ai ## Citation If you find the model useful in your work, please use the following bibtex when citing. ``` @article{alfred-40b-1023, title={Alfred-40B-1023}, author={Hallström, Oskar and Chatelain, Amélie and Thiriet, Clément and Séailles, Julien and Cavaillès, Adrien and Marmet, Axel}, year={2023} } ```
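As a worked illustration of the NTK-YaRN formulas in the "Context length extension" section above, here is a minimal numerical sketch. The RoPE base of 10000 and the head dimension of 64 are assumptions based on the Falcon-40B architecture; neither value is stated explicitly in this card.

```python
import math

# Assumed values (not stated in this card): standard RoPE base and
# Falcon-40B's head dimension; s = 8192 / 2048 for the 2K -> 8K extension.
b = 10_000.0        # original RoPE base (assumption)
head_dim = 64       # |D|, the model's head dimension (assumption)
s = 8192 / 2048     # context length scaling factor

# NTK-Margin / NTK-YaRN base adjustment: b' = b * (s + 1)^(|D| / (|D| - 2))
b_prime = b * (s + 1) ** (head_dim / (head_dim - 2))

# YaRN-style scaling factor applied to the query and key matrices
m = 1 + 0.1 * math.log(s)

print(f"adjusted RoPE base b': {b_prime:,.0f}")  # ~52,700 under these assumptions
print(f"Q/K scaling factor m: {m:.4f}")          # ~1.1386
```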
SimplCup/Marmok
SimplCup
2023-11-17T21:27:28Z
0
0
null
[ "license:cc-by-nc-nd-4.0", "region:us" ]
null
2023-11-17T21:27:08Z
--- license: cc-by-nc-nd-4.0 ---
ewof/koishi-instruct-3b
ewof
2023-11-17T21:24:34Z
1,431
3
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "dataset:ewof/koishi-instruct-metharme", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-12T03:36:18Z
---
datasets:
- ewof/koishi-instruct-metharme
---

## Base Model

Native fine-tune of togethercomputer/RedPajama-INCITE-Base-3B-v1.

## Prompting

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.

The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ewof__koishi-instruct-3b)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 33.48 |
| ARC (25-shot) | 40.96 |
| HellaSwag (10-shot) | 64.54 |
| MMLU (5-shot) | 26.58 |
| TruthfulQA (0-shot) | 31.65 |
| Winogrande (5-shot) | 64.09 |
| GSM8K (5-shot) | 1.14 |
| DROP (3-shot) | 5.41 |
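As a concrete illustration of the role-token format described above, here is a minimal generation sketch with `transformers`; the system and user messages are placeholders, and the sampling parameters are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ewof/koishi-instruct-3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Chain the role tokens described above into a single prompt string
prompt = (
    "<|system|>You are a helpful assistant."
    "<|user|>Explain what instruction tuning is in one paragraph."
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Print only the newly generated continuation
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```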
marcus-naslum/ppo-lunar
marcus-naslum
2023-11-17T21:24:15Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-17T21:23:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 239.59 +/- 44.65 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
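Since the usage section above is left as a TODO, here is a minimal loading sketch; the checkpoint filename is an assumption, as the card does not list the files in the repository.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename below is hypothetical.
checkpoint = load_from_hub(
    repo_id="marcus-naslum/ppo-lunar",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Run one step in a fresh environment as a smoke test
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```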
dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16
dhmeltzer
2023-11-17T21:21:29Z
12
0
peft
[ "peft", "safetensors", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us" ]
null
2023-09-05T21:51:46Z
--- library_name: peft base_model: meta-llama/Llama-2-13b-hf --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16) | Metric | Value | |-----------------------|---------------------------| | Avg. | 48.08 | | ARC (25-shot) | 60.41 | | HellaSwag (10-shot) | 82.58 | | MMLU (5-shot) | 55.86 | | TruthfulQA (0-shot) | 43.61 | | Winogrande (5-shot) | 76.72 | | GSM8K (5-shot) | 8.49 | | DROP (3-shot) | 8.92 |
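The quantization config listed above can also be reconstructed at inference time. Below is a minimal sketch of loading this LoRA adapter on top of the base model, assuming access to the gated `meta-llama/Llama-2-13b-hf` weights.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base_model, "dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16")
```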
dhmeltzer/Llama-2-13b-hf-eli5-wiki-1024_r_64_alpha_16_merged
dhmeltzer
2023-11-17T21:20:49Z
1,541
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-14T17:55:09Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-eli5-wiki-1024_r_64_alpha_16_merged) | Metric | Value | |-----------------------|---------------------------| | Avg. | 46.93 | | ARC (25-shot) | 58.96 | | HellaSwag (10-shot) | 81.94 | | MMLU (5-shot) | 55.0 | | TruthfulQA (0-shot) | 40.26 | | Winogrande (5-shot) | 76.56 | | GSM8K (5-shot) | 8.72 | | DROP (3-shot) | 7.05 |
dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
dhmeltzer
2023-11-17T21:20:46Z
8
0
peft
[ "peft", "safetensors", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us" ]
null
2023-09-05T18:54:11Z
--- library_name: peft base_model: meta-llama/Llama-2-13b-hf --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16) | Metric | Value | |-----------------------|---------------------------| | Avg. | 46.47 | | ARC (25-shot) | 59.04 | | HellaSwag (10-shot) | 82.33 | | MMLU (5-shot) | 55.36 | | TruthfulQA (0-shot) | 35.75 | | Winogrande (5-shot) | 76.32 | | GSM8K (5-shot) | 10.01 | | DROP (3-shot) | 6.48 |
dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged
dhmeltzer
2023-11-17T21:20:41Z
1,549
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-25T01:33:13Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged) | Metric | Value | |-----------------------|---------------------------| | Avg. | 43.96 | | ARC (25-shot) | 53.75 | | HellaSwag (10-shot) | 78.76 | | MMLU (5-shot) | 46.02 | | TruthfulQA (0-shot) | 43.31 | | Winogrande (5-shot) | 73.48 | | GSM8K (5-shot) | 4.7 | | DROP (3-shot) | 7.72 |
dhmeltzer/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged
dhmeltzer
2023-11-17T21:20:39Z
1,467
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-11T03:35:53Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged) | Metric | Value | |-----------------------|---------------------------| | Avg. | 43.55 | | ARC (25-shot) | 53.67 | | HellaSwag (10-shot) | 78.09 | | MMLU (5-shot) | 45.63 | | TruthfulQA (0-shot) | 41.72 | | Winogrande (5-shot) | 73.56 | | GSM8K (5-shot) | 5.61 | | DROP (3-shot) | 6.6 |
dhmeltzer/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged
dhmeltzer
2023-11-17T21:20:38Z
1,477
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-25T02:16:10Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged) | Metric | Value | |-----------------------|---------------------------| | Avg. | 43.25 | | ARC (25-shot) | 53.41 | | HellaSwag (10-shot) | 77.9 | | MMLU (5-shot) | 43.56 | | TruthfulQA (0-shot) | 40.81 | | Winogrande (5-shot) | 74.59 | | GSM8K (5-shot) | 5.08 | | DROP (3-shot) | 7.37 |