modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-12 18:33:19) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 555 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-12 18:33:14) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
muzammil-eds/Tinyllama-Chat-v0.4-OpenOrca-v2
|
muzammil-eds
| 2024-03-04T17:18:43Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"en",
"dataset:Open-Orca/OpenOrca",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T16:22:17Z |
---
library_name: transformers
tags:
- trl
- sft
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
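A minimal sketch with the 🤗 Transformers `pipeline` API; the generation settings below are illustrative assumptions, not documented defaults for this checkpoint.
```python
from transformers import pipeline

# Load this repo's checkpoint; adjust device/dtype for your hardware.
generator = pipeline(
    "text-generation",
    model="muzammil-eds/Tinyllama-Chat-v0.4-OpenOrca-v2",
    device_map="auto",
)
print(generator("Explain the OpenOrca dataset in one paragraph.", max_new_tokens=128)[0]["generated_text"])
```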
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YusufTree/ppo-LunarLander-v2
|
YusufTree
| 2024-03-04T17:17:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-23T17:52:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 227.13 +/- 90.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
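In the meantime, a minimal loading sketch with `huggingface_sb3`; the checkpoint filename inside the repo is an assumption.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(
    repo_id="YusufTree/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)
```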
|
MhmdSyd/AceGPT-13B-chat-GGUF
|
MhmdSyd
| 2024-03-04T17:12:58Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"ar",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-02-15T05:16:44Z |
---
license: apache-2.0
language:
- ar
- en
pipeline_tag: text-generation
library_name: transformers
---
# <b>AceGPT</b>
[AceGPT](https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat) is a fully fine-tuned generative text model collection based on LLaMA2, specialized for the Arabic language domain. This is the repository for the 13B-chat model.
## Model Developers
We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and the King Abdullah University of Science and Technology (KAUST).
## Variations
The AceGPT family comes in two parameter sizes, 7B and 13B; each size has a base variant and a -chat variant.
## Input
Models input text only.
## Output
Models output text only.
## Model Evaluation Results
Experiments were conducted on Arabic Vicuna-80 and Arabic AlpacaEval. Numbers are the average performance ratio relative to ChatGPT over three runs. We do not report results for the raw Llama-2 models since they cannot properly generate Arabic text.
| | Arabic Vicuna-80 | Arabic AlpacaEval |
|------------------------------|--------------------|---------------------|
| Phoenix Chen et al. (2023a) | 71.92% ± 0.2% | 65.62% ± 0.3% |
| Phoenix–multiple-langs Chen et al. (2023b) | 71.67% ± 0.7% | 65.36% ± 0.1% |
| Jais-13B-chat Sengupta et al. (2023) | 75.40% ± 1.6% | 74.95% ± 0.2% |
| AceGPT-7B-chat | 94.82% ± 0.2% | 93.81% ± 0.1% |
| AceGPT-13B-chat | 100.88% ± 0.4% | 97.95% ± 0.1% |
More details are available at https://github.com/FreedomIntelligence/AceGPT/tree/main
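Since this repository ships GGUF quantizations, a minimal loading sketch with `llama-cpp-python`; the exact GGUF filename is an assumption.
```python
from llama_cpp import Llama

# Load a locally downloaded GGUF file from this repo; the filename is hypothetical.
llm = Llama(model_path="AceGPT-13B-chat.Q4_K_M.gguf", n_ctx=2048)
out = llm("ما هي عاصمة المملكة العربية السعودية؟", max_tokens=64)
print(out["choices"][0]["text"])
```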
|
vaicai/kaifa-mx-v0.13.1
|
vaicai
| 2024-03-04T17:09:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-04T17:09:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jayomb/Hermes-Eagle-Rwkv-7b-Safetensor
|
jayomb
| 2024-03-04T17:09:17Z | 0 | 0 | null |
[
"en",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"region:us"
] | null | 2024-03-04T16:52:41Z |
---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
language:
- en
---
**This is a safetensors-formatted version of [EleutherAI/Hermes-RWKV-v5-7B](https://huggingface.co/EleutherAI/Hermes-RWKV-v5-7B) for use in [ai00](https://github.com/cgisky1980/ai00_rwkv_server) or [rwkv-runner](https://github.com/josStorer/RWKV-Runner)**
This is a 7B-parameter RWKV-v5 ("Eagle") RNN language model pretrained on multilingual data and fine-tuned on Nous Research's OpenHermes 2.5 chat dataset.
The chat format is identical to the default HuggingFace format, **EXCEPT** that we replace the `<|im_start|>` and `<|im_end|>` special tokens with the unused ASCII codes `"\x16"` and `"\x17"` respectively. This was done to ensure complete compatibility with the standard RWKV World tokenizer.
An example interaction:
```
\x16user
Write a poem about the beauty of mathematics in the style of Emily Dickinson.\x17
\x16assistant
In the heart of Numbers, so precise,
A Beauty dwells - both cool and nice.
An elegance that sings in sums,
And dances to algebraic drums.
Addition's grace, subtraction's might,
Multiplication's sheer delight,
Division's firm yet gentle hand,
All part of this enchanted land.
Infinity, a lover bold,
To whom the stars their secrets told,
While Zero, cloaked in mystery,
Unlocks each mathematical decree.
Fractals bloom like flowers wild,
Geometry, Nature's favored child,
Calculus, with steady gaze,
Maps out life's intricate maze.
Pi, an endless serenade,
In circles' arms forever laid,
And prime numbers, standing lone,
Make the universe their own.
Each equation, theorem scribed,
In parchment old or computer imbibed,
Is but a whisper from the skies,
A cosmic truth in disguise.\x17
```
In the above, newlines are printed without escaping, while ASCII codes are escaped for readability.
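A tiny helper that builds a single-turn prompt in this format, based directly on the description above:
```python
def build_prompt(user_message: str) -> str:
    # ChatML-style turns, with <|im_start|>/<|im_end|> replaced by the
    # unused ASCII codes \x16 and \x17 for RWKV World tokenizer compatibility.
    return f"\x16user\n{user_message}\x17\n\x16assistant\n"

prompt = build_prompt("Write a poem about the beauty of mathematics in the style of Emily Dickinson.")
```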
|
OwOpeepeepoopoo/gemmerica_l4
|
OwOpeepeepoopoo
| 2024-03-04T17:07:11Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T11:22:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
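A minimal sketch that assumes the tokenizer ships a chat template (the repo is tagged `conversational`); untested against this checkpoint.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOpeepeepoopoo/gemmerica_l4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```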
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/labradorite-13b-GGUF
|
bartowski
| 2024-03-04T17:04:44Z | 34 | 1 | null |
[
"gguf",
"labradorite",
"llama",
"llama-2",
"ibm",
"lab",
"labrador",
"merlinite",
"text-generation",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T16:41:42Z |
---
pipeline_tag: text-generation
tags:
- labradorite
- llama
- llama-2
- ibm
- lab
- labrador
- merlinite
license: llama2
license_link: https://ai.meta.com/llama/license/
language:
- en
quantized_by: bartowski
---
## Llamacpp Quantizations of labradorite-13b
Using <a href="https://github.com/ggerganov/llama.cpp/commit/fa974646e1a2024fc7dc9e6f27cf1f2f5d4a3763">llama.cpp commit fa97464</a> for quantization.
Original model: https://huggingface.co/ibm/labradorite-13b
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [labradorite-13b-Q8_0.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q8_0.gguf) | Q8_0 | 13.83GB | Extremely high quality, generally unneeded but max available quant. |
| [labradorite-13b-Q6_K.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q6_K.gguf) | Q6_K | 10.67GB | Very high quality, near perfect, *recommended*. |
| [labradorite-13b-Q5_K_M.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q5_K_M.gguf) | Q5_K_M | 9.22GB | High quality, very usable. |
| [labradorite-13b-Q5_K_S.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q5_K_S.gguf) | Q5_K_S | 8.97GB | High quality, very usable. |
| [labradorite-13b-Q5_0.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q5_0.gguf) | Q5_0 | 8.97GB | High quality, older format, generally not recommended. |
| [labradorite-13b-Q4_K_M.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q4_K_M.gguf) | Q4_K_M | 7.86GB | Good quality, similar to 4.25 bpw. |
| [labradorite-13b-Q4_K_S.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q4_K_S.gguf) | Q4_K_S | 7.42GB | Slightly lower quality with small space savings. |
| [labradorite-13b-Q4_0.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q4_0.gguf) | Q4_0 | 7.36GB | Decent quality, older format, generally not recommended. |
| [labradorite-13b-Q3_K_L.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q3_K_L.gguf) | Q3_K_L | 6.92GB | Lower quality but usable, good for low RAM availability. |
| [labradorite-13b-Q3_K_M.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q3_K_M.gguf) | Q3_K_M | 6.33GB | Even lower quality. |
| [labradorite-13b-Q3_K_S.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q3_K_S.gguf) | Q3_K_S | 5.65GB | Low quality, not recommended. |
| [labradorite-13b-Q2_K.gguf](https://huggingface.co/bartowski/labradorite-13b-GGUF/blob/main/labradorite-13b-Q2_K.gguf) | Q2_K | 4.85GB | Extremely low quality, *not* recommended. |
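To fetch a single file rather than the whole branch, one option is `hf_hub_download` from `huggingface_hub` (the quant choice below is just an example):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file from this repo into the local HF cache.
path = hf_hub_download(
    repo_id="bartowski/labradorite-13b-GGUF",
    filename="labradorite-13b-Q4_K_M.gguf",
)
print(path)
```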
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
bjaburg/knabllm2
|
bjaburg
| 2024-03-04T16:59:50Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T16:59:38Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Aharneish/Llama-Chat-final
|
Aharneish
| 2024-03-04T16:58:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-03-04T07:43:25Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: llama-2-chat-sprit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-2-chat-sprit
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2
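Since this repo contains PEFT adapters for Llama-2-7b-chat-hf, a minimal loading sketch (assumes approved access to the gated base model):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"  # gated; requires approved access
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach this repo's adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "Aharneish/Llama-Chat-final")
```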
|
fazito25/a2c-PandaReachDense-v3
|
fazito25
| 2024-03-04T16:53:39Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T16:49:24Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
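In the meantime, a sketch that loads and evaluates the agent; the checkpoint filename and the `panda_gym` setup are assumptions.
```python
import gymnasium as gym
import panda_gym  # registers the PandaReachDense-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="fazito25/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # hypothetical filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```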
|
bjaburg/autotrainknab
|
bjaburg
| 2024-03-04T16:46:17Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T16:46:14Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
aloobun/Reyna-Mini-1.8B-v0.2
|
aloobun
| 2024-03-04T16:45:50Z | 118 | 12 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chatml",
"finetune",
"gpt4",
"synthetic data",
"custom_code",
"conversational",
"dataset:Locutusque/Hercules-v3.0",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T02:08:50Z |
---
license: other
library_name: transformers
tags:
- chatml
- finetune
- gpt4
- synthetic data
- custom_code
- qwen2
datasets:
- Locutusque/Hercules-v3.0
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat/raw/main/LICENSE
model-index:
- name: Reyna-Mini-1.8B-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 60.19
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.24
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
---

- Finetuned from [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat) with SFT on the Hercules v3 dataset.
- This marks the third model in this series.
- Format: ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Next step would be a DPO training run on top.
## Benchmarks:
|Avg. | Arc | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|
|45.94 | 36.6 |60.19 | 44.75 | 41.24 | 61.56 | 31.31 |
## Example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, StoppingCriteria
import torch
class MyStoppingCriteria(StoppingCriteria):
def __init__(self, target_sequence, prompt):
self.target_sequence = target_sequence
self.prompt=prompt
def __call__(self, input_ids, scores, **kwargs):
generated_text = tokenizer.decode(input_ids[0])
generated_text = generated_text.replace(self.prompt,'')
if self.target_sequence in generated_text:
return True
return False
def __len__(self):
return 1
def __iter__(self):
yield self
modelpath="aloobun/Reyna-Mini-1.8B-v0.2"
model = AutoModelForCausalLM.from_pretrained(
modelpath,
torch_dtype=torch.bfloat16,
device_map="cuda",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
modelpath,
trust_remote_code=True,
use_fast=False,
)
prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nIs there inherent order in nature or is it all chaos and chance?<|im_end|>\n<|im_start|>assistant\n"
encoded_input = tokenizer(prompt, return_tensors='pt')
input_ids=encoded_input['input_ids'].cuda()
streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True)
op = model.generate(
input_ids,
streamer=streamer,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.6,
top_p=0.8,
max_new_tokens=512,
stopping_criteria=MyStoppingCriteria("<|im_end|>", prompt)
)
```
## Output:
>Nature appears to be inherently organized, with patterns and structures that can be observed across different levels of organization. However, the exact mechanisms by which these patterns emerge and evolve remain largely unknown.
>The universe seems to be governed by a series of laws and principles known as "laws of physics," such as Newton's laws of motion, electromagnetism, and thermodynamics. These laws govern how matter and energy interact with each other and how they behave over time.
>Despite our understanding of these laws, we still struggle to comprehend the underlying mechanisms that allow for the emergence of complex patterns and structures. This is because the universe operates on a scale that is too small for us to observe directly, and therefore we cannot fully understand its internal workings.
>In summary, while there may be some level of order and structure within the universe, the precise mechanisms governing this order remain largely unknown.<|im_end|>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__Reyna-Mini-1.8B-v0.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.94|
|AI2 Reasoning Challenge (25-Shot)|36.60|
|HellaSwag (10-Shot) |60.19|
|MMLU (5-Shot) |44.75|
|TruthfulQA (0-shot) |41.24|
|Winogrande (5-shot) |61.56|
|GSM8k (5-shot) |31.31|
|
cybercoolman/falcon-7b-instruct-ft-adapters
|
cybercoolman
| 2024-03-04T16:43:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-02T20:35:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hui168/ppo-LunarLander-v2
|
hui168
| 2024-03-04T16:40:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T16:40:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.32 +/- 21.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
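In the meantime, a sketch that loads the checkpoint and rolls out one episode; the filename is an assumption.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

model = PPO.load(load_from_hub(
    repo_id="hui168/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
))

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```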
|
rafalvar/mistral-7b-ft-tc
|
rafalvar
| 2024-03-04T16:37:31Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-04T16:37:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nandhu123S/my-pet-cat-agn
|
Nandhu123S
| 2024-03-04T16:37:05Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-04T16:32:11Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-AGN Dreambooth model trained by Nandhu123S following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AME21CS025
Sample pictures of this concept:

|
carsenk/flippa-v2
|
carsenk
| 2024-03-04T16:35:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-03-04T16:11:23Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: flippa-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flippa-v2
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on a mixed dataset of filtered non-refusal data, math, and code.
It achieves the following results on the evaluation set:
- Loss: 0.9289
## Model description
My second experiment using quantized LoRA (QLoRA) with Mistral-7B-Instruct, trained on an A100 in one hour. I will increase training time and the amount of data as I gain access to more GPUs.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5374 | 0.99 | 37 | 1.4226 |
| 1.1746 | 2.0 | 75 | 1.2444 |
| 1.0746 | 2.99 | 112 | 1.1636 |
| 0.9931 | 4.0 | 150 | 1.1037 |
| 0.9587 | 4.99 | 187 | 1.0549 |
| 0.9101 | 6.0 | 225 | 1.0124 |
| 0.8847 | 6.99 | 262 | 0.9782 |
| 0.8239 | 8.0 | 300 | 0.9515 |
| 0.818 | 8.99 | 337 | 0.9345 |
| 0.7882 | 9.87 | 370 | 0.9289 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ConvexAI/Metabird-7B
|
ConvexAI
| 2024-03-04T16:34:26Z | 123 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:leveldevai/TurdusBeagle-7B",
"base_model:finetune:leveldevai/TurdusBeagle-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T10:06:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: leveldevai/TurdusBeagle-7B
model-index:
- name: Metabird-7B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Metabird-7B
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: leveldevai/TurdusBeagle-7B
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: shuyuej/metamath_gsm8k
type:
system_prompt: ""
field_instruction: question
field_output: answer
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
## Metabird
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
This model is a fine-tuned version of [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B) on the shuyuej/metamath_gsm8k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4017
## Model description
More information soon
## Intended uses & limitations
More information soon
## Training and evaluation data
More information soon
## Training procedure
More information soon
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9074 | 0.05 | 1 | 0.9932 |
| 0.5012 | 0.26 | 5 | 0.4849 |
| 0.4204 | 0.53 | 10 | 0.4435 |
| 0.3748 | 0.79 | 15 | 0.4017 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__Metabird-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.03|
|AI2 Reasoning Challenge (25-Shot)|69.54|
|HellaSwag (10-Shot) |87.54|
|MMLU (5-Shot) |65.27|
|TruthfulQA (0-shot) |57.94|
|Winogrande (5-shot) |83.03|
|GSM8k (5-shot) |62.85|
|
ConvexAI/BurningBruce-005
|
ConvexAI
| 2024-03-04T16:34:13Z | 54 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"moe",
"conversational",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T15:14:56Z |
---
language:
- en
license: apache-2.0
tags:
- merge
- moe
model-index:
- name: BurningBruce-005
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-005
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-005
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.76
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-005
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-005
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-005
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-005
name: Open LLM Leaderboard
---

# BurningBruce-005
We didn't start the **fire**.
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit/tree/mixtral)
by Kquant03, Dontriskit and NeuralNovel
[Join our Discord!](https://discord.gg/Qge8Ds9C)
## Models used:
- [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B) - base
- [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B) - expert #1
- [udkai/Turdus](https://huggingface.co/udkai/Turdus) - expert #2
- [nfaheem/Marcoroni-7b-DPO-Merge](https://huggingface.co/nfaheem/Marcoroni-7b-DPO-Merge) - expert #3
- [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2) - expert #4
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enables models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token “Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.
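As a toy illustration of this top-2 routing, here is a from-scratch PyTorch sketch (all names and shapes are hypothetical; this is not the actual Mixtral kernel):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def top2_moe_layer(x, gate, experts):
    """Route each token to its top-2 experts and combine outputs additively.

    x: (num_tokens, hidden_dim); gate: nn.Linear(hidden_dim, num_experts);
    experts: list of FFN modules. A toy sketch, not the real Mixtral code.
    """
    logits = gate(x)                                # (num_tokens, num_experts)
    weights, idx = torch.topk(logits, k=2, dim=-1)  # pick two experts per token
    weights = F.softmax(weights, dim=-1)            # renormalize over the top-2
    out = torch.zeros_like(x)
    for slot in range(2):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e                # tokens routed to expert e
            if mask.any():
                out[mask] += weights[mask, slot, None] * expert(x[mask])
    return out

# Tiny usage example with 4 experts of hidden size 8:
gate = nn.Linear(8, 4)
experts = [nn.Sequential(nn.Linear(8, 16), nn.GELU(), nn.Linear(16, 8)) for _ in range(4)]
y = top2_moe_layer(torch.randn(10, 8), gate, experts)
```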

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
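In the `transformers` implementation of Mixtral-style models, this auxiliary loss is driven by the config; a minimal sketch, assuming a recent `transformers` version (the tiny dimensions are hypothetical, chosen only to make the model cheap to build):
```python
import torch
from transformers import MixtralConfig, MixtralForCausalLM

# Tiny config so the sketch is cheap to instantiate; real MoE models are far larger.
config = MixtralConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=2,
    num_local_experts=4,
    num_experts_per_tok=2,
    router_aux_loss_coef=0.02,   # weight of the load-balancing auxiliary loss
    output_router_logits=True,   # ask the model to compute and return aux_loss
)
model = MixtralForCausalLM(config)

input_ids = torch.randint(0, config.vocab_size, (1, 16))
out = model(input_ids=input_ids, labels=input_ids)
print(out.loss, out.aux_loss)    # out.loss already includes coef * aux_loss
```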
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained at the same time as the experts.
Sponsored by: [Dontriskit](https://huggingface.co/h2m)
# Evals
*coming soon*
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__BurningBruce-005)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.53|
|AI2 Reasoning Challenge (25-Shot)|72.01|
|HellaSwag (10-Shot) |88.31|
|MMLU (5-Shot) |64.76|
|TruthfulQA (0-shot) |67.27|
|Winogrande (5-shot) |83.35|
|GSM8k (5-shot) |71.49|
| macadeliccc/gemma-orchid-7b-dpo | macadeliccc | 2024-03-04T16:33:51Z | 118 | 6 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "dataset:Thermostatic/flowers", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:Intel/orca_dpo_pairs", "dataset:glaiveai/glaive-function-calling-v2", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-26T02:49:57Z |
---
license: other
datasets:
- Thermostatic/flowers
- jondurbin/truthy-dpo-v0.1
- Intel/orca_dpo_pairs
- glaiveai/glaive-function-calling-v2
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
model-index:
- name: gemma-orchid-7b-dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
---
# Gemma Orchid 7b
<div align="center">

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
</div>
This model is the second checkpoint of a future project. It is capable of function calling and has a strong conversational foundation.
This model has been finetuned on roughly 80k samples so far.
# Training
+ Time to complete: ~20 hours
+ Datasets: Thermostatic/flowers, Intel/orca_dpo_pairs, jondurbin/truthy-dpo-v0.1, glaiveai/glaive-function-calling-v2
+ Evaluation loss: 0.69
+ Method: LoRA
+ Prompt Format: ChatML (see the sketch below)
Thermostatic/flowers is a blend of open-source model generations formatted in ShareGPT. It also includes all of Capybara.
This model has been exposed to a wide variety of data. [macadeliccc/gemma-function-calling-7b](https://huggingface.co/macadeliccc/gemma-function-calling-7b) is suitable to finetune further with the dataset of your choosing.
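Since the card lists ChatML as the prompt format, here is a minimal sketch of assembling a ChatML prompt by hand (assuming the standard `<|im_start|>`/`<|im_end|>` markers; this helper is illustrative, not part of the repo):
```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-formatted prompt string."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.",
                       "Write me a poem about Machine Learning.")
```
The resulting string can be passed to the tokenizer in the examples below in place of the raw `input_text`.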
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Evaluations
In progress
## ExLlamaV2
Available [here](https://huggingface.co/bartowski/gemma-orchid-7b-dpo-exl2)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__gemma-orchid-7b-dpo)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.37|
|AI2 Reasoning Challenge (25-Shot)|62.88|
|HellaSwag (10-Shot) |80.95|
|MMLU (5-Shot) |61.41|
|TruthfulQA (0-shot) |53.27|
|Winogrande (5-shot) |77.51|
|GSM8k (5-shot) |50.19|
| macadeliccc/piccolo-8x7b | macadeliccc | 2024-03-04T16:33:35Z | 1,372 | 1 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-16T19:41:13Z |
---
license: cc-by-4.0
model-index:
- name: piccolo-8x7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.17
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
---
# Piccolo-8x7b
**In loving memory of my dog Klaus (Piccolo)**
_~ Piccolo (Italian): the little one ~_

Based on mlabonne/NeuralBeagle-7b
Quants are available [here](https://huggingface.co/macadeliccc/piccolo-8x7b-GGUF)
# Code Example
Inference and Evaluation colab available [here](https://colab.research.google.com/drive/1ZqLNvVvtFHC_4v2CgcMVh7pP9Fvx0SbI?usp=sharing)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_response(prompt):
"""
Generate a response from the model based on the input prompt.
Args:
prompt (str): Prompt for the model.
Returns:
str: The generated response from the model.
"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
return response
model_id = "macadeliccc/piccolo-8x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)  # 4-bit loading requires bitsandbytes + accelerate
prompt = "What is the best way to train Cane Corsos?"
print("Response:")
print(generate_response(prompt), "\n")
```
The model is capable of quality code, math, and logical reasoning. Try whatever questions you think of.
## Example output

# Evaluations

https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__piccolo-8x7b
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__piccolo-8x7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.80|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot) |86.98|
|MMLU (5-Shot) |64.13|
|TruthfulQA (0-shot) |64.17|
|Winogrande (5-shot) |79.87|
|GSM8k (5-shot) |72.02|
| ConvexAI/Pelican-9b-v0.1 | ConvexAI | 2024-03-04T16:33:20Z | 49 | 6 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:flemmingmiguel/MBX-7B", "base_model:merge:flemmingmiguel/MBX-7B", "base_model:flemmingmiguel/MBX-7B-v3", "base_model:merge:flemmingmiguel/MBX-7B-v3", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-02T12:44:05Z |
---
license: apache-2.0
tags:
- mergekit
- merge
base_model:
- flemmingmiguel/MBX-7B
- flemmingmiguel/MBX-7B-v3
model-index:
- name: Pelican-9b-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 47.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 66.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.61
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
⚠️**Warning** ⚠️
Model is broken and outputs only broken German.
Possibly obsessed with Fußball. ⚽
### Merge Method
This model was merged using the passthrough merge method; it only speaks German and is somewhat obsessed with football.
### Models Merged
The following models were included in the merge:
* [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: flemmingmiguel/MBX-7B-v3
layer_range: [0, 32]
- sources:
- model: flemmingmiguel/MBX-7B
layer_range: [20, 32]
merge_method: passthrough
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__Pelican-9b-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |50.38|
|AI2 Reasoning Challenge (25-Shot)|47.95|
|HellaSwag (10-Shot) |66.22|
|MMLU (5-Shot) |62.85|
|TruthfulQA (0-shot) |50.61|
|Winogrande (5-shot) |74.66|
|GSM8k (5-shot) | 0.00|
| ConvexAI/BurningBruce-004 | ConvexAI | 2024-03-04T16:33:14Z | 58 | 5 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "conversational", "en", "arxiv:2101.03961", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-20T13:58:06Z |
---
language:
- en
license: apache-2.0
tags:
- merge
- moe
model-index:
- name: BurningBruce-004
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-004
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-004
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-004
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-004
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-004
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/BurningBruce-004
name: Open LLM Leaderboard
---

# BurningBruce-004
We didn't start the **fire**.
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit/tree/mixtral)
by Kquant03, Dontriskit and NeuralNovel
[Join our Discord!](https://discord.gg/rJXGjmxqzS)
## Models used:
- [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B) - base
- [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B) - expert #1
- [udkai/Turdus](https://huggingface.co/udkai/Turdus) - expert #2
- [nfaheem/Marcoroni-7b-DPO-Merge](https://huggingface.co/nfaheem/Marcoroni-7b-DPO-Merge) - expert #3
- [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2) - expert #4
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enables models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token “Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
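To make the load-balancing idea concrete, here is a from-scratch sketch of the Switch-Transformers-style auxiliary loss, `num_experts * sum_i f_i * P_i` (hypothetical shapes; not the `transformers` implementation):
```python
import torch

def load_balancing_loss(router_logits, top1_idx):
    """Switch-style auxiliary loss: num_experts * sum_i f_i * P_i.

    router_logits: (num_tokens, num_experts) raw gate outputs.
    top1_idx: (num_tokens,) index of the expert each token was dispatched to.
    """
    num_experts = router_logits.shape[-1]
    probs = torch.softmax(router_logits, dim=-1)
    # f_i: fraction of tokens dispatched to expert i
    f = torch.bincount(top1_idx, minlength=num_experts).float() / top1_idx.numel()
    # P_i: mean router probability assigned to expert i
    p = probs.mean(dim=0)
    return num_experts * torch.sum(f * p)
```
The loss is minimized when dispatch counts and probability mass are spread uniformly across experts, which is exactly the behavior the paragraph above describes.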
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained at the same time as the experts.
Sponsored by: [Dontriskit](https://huggingface.co/h2m)
# Evals
*coming soon*
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__BurningBruce-004)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.94|
|AI2 Reasoning Challenge (25-Shot)|73.29|
|HellaSwag (10-Shot) |88.63|
|MMLU (5-Shot) |64.68|
|TruthfulQA (0-shot) |68.39|
|Winogrande (5-shot) |84.06|
|GSM8k (5-shot) |70.58|
| KnutJaegersberg/Deacon-20B | KnutJaegersberg | 2024-03-04T16:31:03Z | 1,443 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:totally-not-an-llm/EverythingLM-data-V3", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-10-09T03:46:13Z |
---
license: cc-by-nc-4.0
datasets:
- totally-not-an-llm/EverythingLM-data-V3
pipeline_tag: text-generation
model-index:
- name: Deacon-20B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-20B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-20B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-20B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-20B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-20B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-20B
name: Open LLM Leaderboard
---

To understand the pun intended, look up my 3B Deacon model.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
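A minimal generation sketch using this template (standard `transformers` API; `max_new_tokens` is an arbitrary choice):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "KnutJaegersberg/Deacon-20B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete "
    "the task as faithfully as you can. While performing the task think "
    "step-by-step and justify your steps.\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```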
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deacon-20B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.28|
|AI2 Reasoning Challenge (25-Shot)|60.75|
|HellaSwag (10-Shot) |81.74|
|MMLU (5-Shot) |60.70|
|TruthfulQA (0-shot) |58.49|
|Winogrande (5-shot) |76.80|
|GSM8k (5-shot) |29.19|
| KnutJaegersberg/Qwen-1_8b-EverythingLM | KnutJaegersberg | 2024-03-04T16:29:50Z | 1,362 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-16T16:14:53Z |
---
license: other
license_name: qwen
license_link: LICENSE
model-index:
- name: Qwen-1_8b-EverythingLM
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 38.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8b-EverythingLM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 62.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8b-EverythingLM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8b-EverythingLM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8b-EverythingLM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8b-EverythingLM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 12.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8b-EverythingLM
name: Open LLM Leaderboard
---
Qwen's non-commercial license applies.
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Qwen-1_8b-EverythingLM)
| Metric |Value|
|---------------------------------|----:|
|Avg. |42.77|
|AI2 Reasoning Challenge (25-Shot)|38.65|
|HellaSwag (10-Shot) |62.66|
|MMLU (5-Shot) |44.94|
|TruthfulQA (0-shot) |38.70|
|Winogrande (5-shot) |58.96|
|GSM8k (5-shot) |12.74|
| KnutJaegersberg/CausalLM-Platypus-14B | KnutJaegersberg | 2024-03-04T16:29:35Z | 1,446 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "dataset:garage-bAInd/Open-Platypus", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-12-01T04:19:15Z |
---
license: other
datasets:
- garage-bAInd/Open-Platypus
model-index:
- name: CausalLM-Platypus-14B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 56.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/CausalLM-Platypus-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/CausalLM-Platypus-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/CausalLM-Platypus-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/CausalLM-Platypus-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/CausalLM-Platypus-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/CausalLM-Platypus-14B
name: Open LLM Leaderboard
---
CausalLM trained for 3 epochs on the Open-Platypus dataset with NEFTune.
Both the Llama 2 and Qwen licenses apply; find them in the attached files.
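For reference, NEFTune perturbs the token embeddings with uniform noise during training; a minimal sketch of the idea (the `alpha` value is hypothetical, not the one used for this run):
```python
import torch

def neftune_noise(embeddings, alpha=5.0):
    """Add NEFTune noise to token embeddings during training.

    embeddings: (batch, seq_len, dim). The noise magnitude is
    alpha / sqrt(seq_len * dim), with noise drawn from Uniform(-1, 1),
    following the NEFTune paper.
    """
    _, seq_len, dim = embeddings.shape
    scale = alpha / (seq_len * dim) ** 0.5
    noise = torch.empty_like(embeddings).uniform_(-1, 1)
    return embeddings + scale * noise
```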
Prompt Example:
```
### Instruction:
What is AGI?
### Response:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__CausalLM-Platypus-14B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.80|
|AI2 Reasoning Challenge (25-Shot)|56.91|
|HellaSwag (10-Shot) |80.06|
|MMLU (5-Shot) |64.98|
|TruthfulQA (0-shot) |47.57|
|Winogrande (5-shot) |76.01|
|GSM8k (5-shot) |57.24|
| RefalMachine/solar_10.7_darulm_unigram_proj_init_darulm_part2_r128_a512_v5 | RefalMachine | 2024-03-04T16:28:59Z | 0 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "region:us"] | null | 2024-03-04T16:24:29Z |
---
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: outputs/solar_10.7_darulm_unigram_proj_init_8node_darulm_part1_v3_1.0_512_12_02_24
model-index:
- name: solar_10.7_darulm_unigram_proj_init_darulm_part2_r128_a512_v3_1.0_512_28_02_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solar_10.7_darulm_unigram_proj_init_darulm_part2_r128_a512_v3_1.0_512_28_02_24
This model is a fine-tuned version of [outputs/solar_10.7_darulm_unigram_proj_init_8node_darulm_part1_v3_1.0_512_12_02_24](https://huggingface.co/outputs/solar_10.7_darulm_unigram_proj_init_8node_darulm_part1_v3_1.0_512_12_02_24) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2309
- Accuracy: 0.5309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 24
- gradient_accumulation_steps: 5
- total_train_batch_size: 120
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
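For orientation, a sketch of how these settings map onto `TrainingArguments` (the output path is hypothetical, and the 24-GPU distributed launch is handled by the launcher rather than this object):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs/solar_10.7_darulm_part2",  # hypothetical path
    learning_rate=5e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=5,   # 1 per device x 24 GPUs x 5 steps = 120 total
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-05,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```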
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 0.0 | 1 | 2.3534 | 0.5148 |
| 2.4427 | 0.01 | 500 | 2.3338 | 0.5155 |
| 2.4399 | 0.02 | 1000 | 2.3276 | 0.5164 |
| 2.4244 | 0.03 | 1500 | 2.3231 | 0.5169 |
| 2.4336 | 0.04 | 2000 | 2.3194 | 0.5177 |
| 2.4201 | 0.06 | 2500 | 2.3156 | 0.5180 |
| 2.4245 | 0.07 | 3000 | 2.3128 | 0.5185 |
| 2.4157 | 0.08 | 3500 | 2.3097 | 0.5187 |
| 2.4054 | 0.09 | 4000 | 2.3070 | 0.5194 |
| 2.4161 | 0.1 | 4500 | 2.3033 | 0.5197 |
| 2.395 | 0.11 | 5000 | 2.3020 | 0.5201 |
| 2.4037 | 0.12 | 5500 | 2.3001 | 0.5204 |
| 2.4188 | 0.13 | 6000 | 2.2977 | 0.5206 |
| 2.406 | 0.15 | 6500 | 2.2961 | 0.5208 |
| 2.4022 | 0.16 | 7000 | 2.2943 | 0.5210 |
| 2.3952 | 0.17 | 7500 | 2.2926 | 0.5217 |
| 2.394 | 0.18 | 8000 | 2.2909 | 0.5217 |
| 2.3828 | 0.19 | 8500 | 2.2891 | 0.5218 |
| 2.3903 | 0.2 | 9000 | 2.2882 | 0.5223 |
| 2.3943 | 0.21 | 9500 | 2.2861 | 0.5224 |
| 2.3944 | 0.22 | 10000 | 2.2851 | 0.5224 |
| 2.3872 | 0.23 | 10500 | 2.2841 | 0.5227 |
| 2.381 | 0.25 | 11000 | 2.2820 | 0.5228 |
| 2.3832 | 0.26 | 11500 | 2.2798 | 0.5232 |
| 2.3813 | 0.27 | 12000 | 2.2793 | 0.5237 |
| 2.3715 | 0.28 | 12500 | 2.2779 | 0.5241 |
| 2.3898 | 0.29 | 13000 | 2.2764 | 0.5240 |
| 2.3717 | 0.3 | 13500 | 2.2757 | 0.5240 |
| 2.3745 | 0.31 | 14000 | 2.2742 | 0.5244 |
| 2.3657 | 0.32 | 14500 | 2.2732 | 0.5244 |
| 2.3782 | 0.34 | 15000 | 2.2715 | 0.5247 |
| 2.3761 | 0.35 | 15500 | 2.2706 | 0.5247 |
| 2.3827 | 0.36 | 16000 | 2.2692 | 0.5249 |
| 2.3659 | 0.37 | 16500 | 2.2678 | 0.5251 |
| 2.3551 | 0.38 | 17000 | 2.2674 | 0.5252 |
| 2.3605 | 0.39 | 17500 | 2.2662 | 0.5255 |
| 2.3579 | 0.4 | 18000 | 2.2654 | 0.5256 |
| 2.361 | 0.41 | 18500 | 2.2642 | 0.5257 |
| 2.3632 | 0.42 | 19000 | 2.2652 | 0.5254 |
| 2.3409 | 0.44 | 19500 | 2.2625 | 0.5261 |
| 2.3546 | 0.45 | 20000 | 2.2631 | 0.5259 |
| 2.361 | 0.46 | 20500 | 2.2611 | 0.5264 |
| 2.355 | 0.47 | 21000 | 2.2598 | 0.5264 |
| 2.3599 | 0.48 | 21500 | 2.2588 | 0.5265 |
| 2.3554 | 0.49 | 22000 | 2.2583 | 0.5265 |
| 2.3552 | 0.5 | 22500 | 2.2571 | 0.5268 |
| 2.3574 | 0.51 | 23000 | 2.2565 | 0.5268 |
| 2.3527 | 0.53 | 23500 | 2.2557 | 0.5272 |
| 2.3574 | 0.54 | 24000 | 2.2548 | 0.5272 |
| 2.3395 | 0.55 | 24500 | 2.2534 | 0.5274 |
| 2.3517 | 0.56 | 25000 | 2.2531 | 0.5272 |
| 2.346 | 0.57 | 25500 | 2.2521 | 0.5275 |
| 2.3469 | 0.58 | 26000 | 2.2515 | 0.5275 |
| 2.3451 | 0.59 | 26500 | 2.2509 | 0.5278 |
| 2.3373 | 0.6 | 27000 | 2.2501 | 0.5277 |
| 2.3512 | 0.61 | 27500 | 2.2493 | 0.5281 |
| 2.3351 | 0.63 | 28000 | 2.2485 | 0.5282 |
| 2.3431 | 0.64 | 28500 | 2.2476 | 0.5282 |
| 2.3399 | 0.65 | 29000 | 2.2463 | 0.5283 |
| 2.3376 | 0.66 | 29500 | 2.2463 | 0.5284 |
| 2.3574 | 0.67 | 30000 | 2.2456 | 0.5285 |
| 2.3312 | 0.68 | 30500 | 2.2447 | 0.5289 |
| 2.3442 | 0.69 | 31000 | 2.2442 | 0.5288 |
| 2.338 | 0.7 | 31500 | 2.2434 | 0.5289 |
| 2.3345 | 0.72 | 32000 | 2.2433 | 0.5291 |
| 2.3314 | 0.73 | 32500 | 2.2420 | 0.5292 |
| 2.326 | 0.74 | 33000 | 2.2414 | 0.5293 |
| 2.3247 | 0.75 | 33500 | 2.2409 | 0.5295 |
| 2.3363 | 0.76 | 34000 | 2.2403 | 0.5296 |
| 2.3409 | 0.77 | 34500 | 2.2395 | 0.5297 |
| 2.335 | 0.78 | 35000 | 2.2391 | 0.5295 |
| 2.3194 | 0.79 | 35500 | 2.2383 | 0.5298 |
| 2.3367 | 0.8 | 36000 | 2.2379 | 0.5301 |
| 2.3286 | 0.82 | 36500 | 2.2372 | 0.5301 |
| 2.3225 | 0.83 | 37000 | 2.2366 | 0.5302 |
| 2.3198 | 0.84 | 37500 | 2.2363 | 0.5301 |
| 2.3274 | 0.85 | 38000 | 2.2355 | 0.5301 |
| 2.3195 | 0.86 | 38500 | 2.2349 | 0.5303 |
| 2.3418 | 0.87 | 39000 | 2.2344 | 0.5303 |
| 2.323 | 0.88 | 39500 | 2.2340 | 0.5304 |
| 2.3211 | 0.89 | 40000 | 2.2336 | 0.5304 |
| 2.3332 | 0.91 | 40500 | 2.2334 | 0.5306 |
| 2.3226 | 0.92 | 41000 | 2.2329 | 0.5307 |
| 2.3329 | 0.93 | 41500 | 2.2325 | 0.5308 |
| 2.3172 | 0.94 | 42000 | 2.2321 | 0.5307 |
| 2.3231 | 0.95 | 42500 | 2.2319 | 0.5308 |
| 2.314 | 0.96 | 43000 | 2.2316 | 0.5309 |
| 2.3205 | 0.97 | 43500 | 2.2315 | 0.5308 |
| 2.3208 | 0.98 | 44000 | 2.2312 | 0.5309 |
| 2.3228 | 0.99 | 44500 | 2.2310 | 0.5309 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.2
- PEFT 0.6.0
| KnutJaegersberg/Walter-Mistral-7B | KnutJaegersberg | 2024-03-04T16:28:59Z | 1,424 | 1 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "dataset:KnutJaegersberg/Auton", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-12-17T20:31:58Z |
---
license: apache-2.0
datasets:
- KnutJaegersberg/Auton
model-index:
- name: Walter-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 58.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Walter-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.43
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Walter-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Walter-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Walter-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Walter-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Walter-Mistral-7B
name: Open LLM Leaderboard
---

Walter is an unaligned, free-thinking AI assistant that has been given time to think about things.
It is trained on instruction datasets with open-source licenses.
It covers a wide range of tasks: about 2/3 of the samples come from large datasets like FLAN, with the rest drawn from various other datasets.
It knows a few tricks, as shown by the prompting examples below:
General Prompt Examples:
```
### Instruction:
Please answer the following question: Process: - The bat sends out sound waves from their mouth or nose - The sound waves hit an object - Echoes are produced - The echo returns to the bat's ears - The bat hears the echo - The bat can figure out where the object is located. suppose No bats live here happens, how will it affect STRONGER echolocation. Which of the following is the supposed perturbation? - directly impacting a step of the process - indirectly impacting a step of the process - not impacting any step of the process Answer:
### Response:
indirectly impacting a step of the process
```
```
### Instruction:
You will be given a text in Russian language which contain different emotion labels from the list - ['joy', ' sadness', 'surprise', 'fear', 'anger']. You need to output the incorrect emotion label, which is irrelevant to the input text. Your answer (i) should contain only one emotion label (ii) should be unambiguous.
### Input:
То чувство, когда идешь домой после 3-х часовой тренировки и в подъезде пахнет запеченной курицей - зачем вы меня дразните :(
### Response:
surprise
```
CoT Prompt Example:
```
### Instruction:
In this task, you are given a sentence with a missing word that can be an object, a person, and/or an action. Fill in the blank with a plausible word. Although each sentence has many correct answers, you only have to write one answer. PersonX puts ___ on PersonY's car
### Reasoning:
Since PersonX is putting water on PersonY's car, it could be assumed that the car is on fire.
### Response:
water
```
Summarization Example:
```
### Instruction:
Below is a text. Write a comprehensive, concise and coherent summary.
### Input:
Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly. ==Overview== Embodied bilingual language refers to the role ......
### Response:
Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly.
```
Essay Example:
```
### Instruction:
Write the original essay for the following summary: A sleeveless dress with a round neck is a great day dress and can be worn for both formal and casual occasions. It has a sweetheart neckline and is made from 100% silk. The dress has a chiffon overlay that covers up the miniskirt and still adds to the class of the dress.
### Response:
Sleeveless Dress with a Round Neck Essay Casual wear, day dress, more formal dress The garment is a sleeveless dress with a round neck and has a sweetheart neckline. It has sheer detail on the neckline and on the back. These aspects of design make it suitable to be worn for day and evening occasions. Its great detail and chicness make it suitable for more formal events, ....
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Walter-Mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |53.00|
|AI2 Reasoning Challenge (25-Shot)|58.87|
|HellaSwag (10-Shot) |83.43|
|MMLU (5-Shot) |58.65|
|TruthfulQA (0-shot) |39.93|
|Winogrande (5-shot) |77.03|
|GSM8k (5-shot) | 0.08|
| KnutJaegersberg/Qwen-1_8B-Chat-llama | KnutJaegersberg | 2024-03-04T16:28:40Z | 1,361 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-19T07:19:09Z |
---
license: other
license_name: qwen
license_link: LICENSE
model-index:
- name: Qwen-1_8B-Chat-llama
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 54.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
name: Open LLM Leaderboard
---
Qwen's non-commercial research license applies.
I used the script below to convert the model, with the CausalLM tokenizer, as suggested in the script's comments:
https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py
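After conversion, the checkpoint loads as a plain Llama-architecture model; a minimal sketch:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "KnutJaegersberg/Qwen-1_8B-Chat-llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # llama architecture after conversion
```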
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Qwen-1_8B-Chat-llama)
| Metric |Value|
|---------------------------------|----:|
|Avg. |42.94|
|AI2 Reasoning Challenge (25-Shot)|36.95|
|HellaSwag (10-Shot) |54.34|
|MMLU (5-Shot) |44.55|
|TruthfulQA (0-shot) |43.70|
|Winogrande (5-shot) |58.88|
|GSM8k (5-shot) |19.26|
| KnutJaegersberg/Deita-2b | KnutJaegersberg | 2024-03-04T16:28:24Z | 48 | 2 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-13T06:28:05Z |
---
license: other
license_name: general-model-license
license_link: https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md
model-index:
- name: Deita-2b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 44.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 70.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.61
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-2b
name: Open LLM Leaderboard
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### User:
How do you fine tune a large language model?
### Assistant:
```
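A minimal sketch of applying this template in code (the template is from this card; the generation settings are illustrative assumptions):
```python
# Build the prompt exactly as shown above and generate a completion (sketch).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/Deita-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete "
    "the task as faithfully as you can. While performing the task think step-by-step "
    "and justify your steps.\n"
    "### User:\n"
    "How do you fine tune a large language model?\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens and print only the assistant's reply.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```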
License Link:
https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deita-2b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |52.35|
|AI2 Reasoning Challenge (25-Shot)|44.71|
|HellaSwag (10-Shot) |70.39|
|MMLU (5-Shot) |52.79|
|TruthfulQA (0-shot) |39.61|
|Winogrande (5-shot) |65.27|
|GSM8k (5-shot) |41.32|
|
bw58/bw58_billsum_model
|
bw58
| 2024-03-04T16:28:03Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T05:42:51Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bw58_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bw58_billsum_model
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1628
- Rouge1: 0.402
- Rouge2: 0.162
- Rougel: 0.286
- Rougelsum: 0.2864
- Gen Len: 99.9879
## Model description
More information needed
## Intended uses & limitations
More information needed
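For basic summarization use, a minimal sketch (the `summarize: ` prefix is an assumption based on common flan-t5/billsum fine-tuning setups):
```python
# Summarize a bill with the fine-tuned flan-t5-small model (sketch).
from transformers import pipeline

summarizer = pipeline("summarization", model="bw58/bw58_billsum_model")
text = "summarize: The bill amends the Internal Revenue Code to ..."  # assumed prefix
print(summarizer(text, max_length=100)[0]["summary_text"])
```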
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.7337 | 1.0 | 66 | 2.2892 | 0.32 | 0.1024 | 0.2173 | 0.2179 | 89.6048 |
| 2.5608 | 2.0 | 132 | 2.1977 | 0.376 | 0.1413 | 0.2618 | 0.262 | 97.7863 |
| 2.5009 | 3.0 | 198 | 2.1694 | 0.4009 | 0.1615 | 0.2846 | 0.285 | 99.9194 |
| 2.4203 | 4.0 | 264 | 2.1628 | 0.402 | 0.162 | 0.286 | 0.2864 | 99.9879 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
KnutJaegersberg/falcon-1b-t-sft
|
KnutJaegersberg
| 2024-03-04T16:27:44Z | 2,484 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"dataset:KnutJaegersberg/trilobite",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T06:19:03Z |
---
license: cc-by-nc-4.0
datasets:
- KnutJaegersberg/trilobite
model-index:
- name: falcon-1b-t-sft
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 32.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 57.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft
name: Open LLM Leaderboard
---
Made for comparison with the TinyLlama model: trained for 3 epochs with NEFTune on the trilobite dataset.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
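A minimal loading and generation sketch; since the repo carries the `custom_code` tag, `trust_remote_code=True` is likely required (an assumption):
```python
# Load the falcon SFT model and query it with the prompt format above (sketch).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/falcon-1b-t-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = (
    "### System:\nYou are an AI assistant.\n"  # abbreviated system text (sketch)
    "### Instruction:\nHow do you fine tune a large language model?\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```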
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__falcon-1b-t-sft)
| Metric |Value|
|---------------------------------|----:|
|Avg. |35.02|
|AI2 Reasoning Challenge (25-Shot)|32.94|
|HellaSwag (10-Shot) |57.24|
|MMLU (5-Shot) |25.26|
|TruthfulQA (0-shot) |38.49|
|Winogrande (5-shot) |55.88|
|GSM8k (5-shot) | 0.30|
|
KnutJaegersberg/internlm-20b-llama
|
KnutJaegersberg
| 2024-03-04T16:27:23Z | 1,377 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T08:10:53Z |
---
license: other
license_name: internlm
license_link: LICENSE
pipeline_tag: text-generation
model-index:
- name: internlm-20b-llama
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.71
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.1
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
---
Open Source License
The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English) / application form (Chinese). For other questions or collaborations, please contact internlm@pjlab.org.cn.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama)
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.09|
|AI2 Reasoning Challenge (25-Shot)|61.35|
|HellaSwag (10-Shot) |82.08|
|MMLU (5-Shot) |61.59|
|TruthfulQA (0-shot) |57.71|
|Winogrande (5-shot) |76.72|
|GSM8k (5-shot) |51.10|
|
KnutJaegersberg/Deita-1_8B
|
KnutJaegersberg
| 2024-03-04T16:27:06Z | 1,365 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T14:54:13Z |
---
license: other
license_name: qwen
license_link: LICENSE
model-index:
- name: Deita-1_8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 60.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.02
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 15.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B
name: Open LLM Leaderboard
---
Qwen's non-commercial license applies.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### User:
How do you fine tune a large language model?
### Assistant:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deita-1_8B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |42.96|
|AI2 Reasoning Challenge (25-Shot)|36.52|
|HellaSwag (10-Shot) |60.63|
|MMLU (5-Shot) |45.62|
|TruthfulQA (0-shot) |40.02|
|Winogrande (5-shot) |59.35|
|GSM8k (5-shot) |15.62|
|
ROCKYSM/my-pet-dog
|
ROCKYSM
| 2024-03-04T16:24:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-04T16:20:29Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by ROCKYSM following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 961221121036
Sample pictures of this concept:
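To sample this concept, a minimal `diffusers` sketch (the instance prompt is an assumption derived from the repo name):
```python
# Generate an image of the DreamBooth concept (sketch; prompt token assumed).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ROCKYSM/my-pet-dog", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of my-pet-dog playing in a park").images[0]
image.save("my_pet_dog.png")
```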
|
DimiPaparas/dqn-SpaceInvadersNoFrameskip-v4
|
DimiPaparas
| 2024-03-04T16:24:09Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T16:23:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 517.00 +/- 88.58
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DimiPaparas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DimiPaparas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
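As an alternative to the zoo scripts, the checkpoint can be loaded directly in Python; the filename below is an assumption (the zoo's default naming convention):
```python
# Pull the checkpoint from the Hub and restore the SB3 agent (sketch).
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="DimiPaparas/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed default filename
)
model = DQN.load(checkpoint)
```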
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DimiPaparas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
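For reference, a sketch of re-creating the agent from these hyperparameters with plain SB3 (the zoo applies `AtariWrapper` and 4-frame stacking, reproduced here):
```python
# Rebuild the training setup described by the hyperparameters above (sketch).
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)  # applies AtariWrapper
env = VecFrameStack(env, n_stack=4)                            # frame_stack: 4

model = DQN(
    "CnnPolicy",
    env,
    learning_rate=1e-4,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1_000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=False,
)
model.learn(total_timesteps=1_000_000)
```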
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
pszemraj/distilgpt2-HC3
|
pszemraj
| 2024-03-04T16:22:26Z | 1,496 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"chatgpt",
"HC3",
"en",
"dataset:pszemraj/HC3-textgen-qa",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-25T21:51:31Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- generated_from_trainer
- chatgpt
- HC3
datasets:
- pszemraj/HC3-textgen-qa
metrics:
- accuracy
widget:
- text: 'Review: Best cast iron skillet you will ever buy. Is this review positive
or negative? <answer>'
example_title: Sentiment analysis
- text: Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
He chose her because <answer>
example_title: Coreference resolution
- text: 'On a shelf, there are five books: a gray book, a red book, a purple book,
a blue book, and a black book. Here''s the puzzle, <answer>'
example_title: Logic puzzles
- text: The two men running to become New York City's next mayor will face off in
their first debate Wednesday night <answer>
example_title: Reading comprehension
- text: Is it true that if I have five 5-hour energy drinks in a single 24-hour period,
I get 25 hours of energy and spontaneously explode? <answer>
example_title: 5 hour energy
- text: what happens if you train a smaller model on a dataset of reinforcement-learning
optimized model responses? <answer>
example_title: deep learning advice
inference:
parameters:
temperature: 0.6
max_length: 96
no_repeat_ngram_size: 4
repetition_penalty: 1.5
eta_cutoff: 0.0008
renormalize_logits: true
pipeline_tag: text-generation
model-index:
- name: distilgpt2-HC3
results: []
---
# distilgpt2-HC3
> what happens if you train a smaller model on a dataset of chatGPT responses?
This happens.

## Model description
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the "chatgpt answers" column of the `Hello-SimpleAI/HC3` dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9983
- Accuracy: 0.5441
## Intended uses & limitations
Despite how it sounds, this model has only 80M parameters and will likely not be factually accurate most of the time.
## Training and evaluation data
Modifications made w.r.t. original dataset:
- drop all rows that did not have a chatGPT answer
- if a row (_i.e. ELI5 question, etc_) had more than one response (_from chatGPT_), randomly choose one of the responses as the answer to the question
- the "question" and chatGPT answer were combined into a single string for that row as follows: `QUESTION_TEXT <answer> CHATGPT_ANSWER_TEXT <end_answer>`
- `<answer>` and `<end_answer>` serve as added tokens to help the model learn "turns" in the conversation
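A generation sketch using this turn format; the sampling settings mirror the inference widget parameters in this card's metadata:
```python
# Ask a question in the dataset's QUESTION <answer> format (sketch).
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/distilgpt2-HC3")
prompt = "what is the difference between nuclear fission and fusion? <answer>"
result = generator(
    prompt,
    do_sample=True,
    temperature=0.6,
    max_length=96,
    no_repeat_ngram_size=4,
    repetition_penalty=1.5,
    eta_cutoff=0.0008,
    renormalize_logits=True,
)
print(result[0]["generated_text"])
```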
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3208
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2485 | 0.98 | 41 | 2.1457 | 0.5158 |
| 2.0757 | 1.98 | 82 | 2.0584 | 0.5304 |
| 1.966 | 2.98 | 123 | 2.0210 | 0.5376 |
| 1.8602 | 3.98 | 164 | 2.0012 | 0.5422 |
| 1.8089 | 4.98 | 205 | 1.9977 | 0.5436 |
| 1.7698 | 5.98 | 246 | 1.9983 | 0.5441 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__distilgpt2-HC3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.18|
|AI2 Reasoning Challenge (25-Shot)|24.66|
|HellaSwag (10-Shot) |27.99|
|MMLU (5-Shot) |23.95|
|TruthfulQA (0-shot) |42.10|
|Winogrande (5-shot) |50.36|
|GSM8k (5-shot) | 0.00|
|
beberik/Nyxene-v3-11B
|
beberik
| 2024-03-04T16:16:13Z | 1,403 | 11 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-12T22:45:39Z |
---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Nyxene-v3-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.91
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B
name: Open LLM Leaderboard
---
## Description
This repo contains bf16 files of Nyxene-v3-11B, a new version with a few changes.
## Model used
- [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp)
- [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
- [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2)
- [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
## Prompt template
Just use chatml.
## The secret sauce
go-bruins-loyal-piano-11B :
```
slices:
- sources:
- model: rwitz/go-bruins-v2
layer_range: [0, 24]
- sources:
- model: chargoddard/loyal-piano-m7-cdpo
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
neural-marcoroni-11B :
```
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 24]
- sources:
- model: Intel/neural-chat-7b-v3-3-Slerp
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-11B :
```
slices:
- sources:
- model: "./go-bruins-loyal-piano-11B"
layer_range: [0, 48]
- model: "./neural-marcoroni-11B"
layer_range: [0, 48]
merge_method: slerp
base_model: "./go-bruins-loyal-piano-11B"
parameters:
t:
- filter: lm_head
value: [0.5]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.5]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I used [mergekit](https://github.com/cg123/mergekit) for all the merging described here.
Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v3-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.72|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot) |85.33|
|MMLU (5-Shot) |64.75|
|TruthfulQA (0-shot) |60.91|
|Winogrande (5-shot) |80.19|
|GSM8k (5-shot) |63.53|
|
beberik/Nyxene-v2-11B
|
beberik
| 2024-03-04T16:16:02Z | 1,411 | 10 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T19:51:21Z |
---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Nyxene-v2-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
---
## Description
This repo contains bf16 files of Nyxene-v2-11B. With the newer base models, the 1% blend used in the [previous version](https://huggingface.co/beberik/Nyxene-v1-11B) no longer seems necessary. And yes, this is a new version. Again.
## Model used
- [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
- [openaccess-ai-collective/DPOpenHermes-7B](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B)
- [fblgit/una-cybertron-7b-v2](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
- [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
## Prompt template
The best one after further testing is this one:
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
## The secret sauce
loyal-piano-cybertron-11B :
```
slices:
- sources:
- model: fblgit/una-cybertron-7b-v2
layer_range: [0, 24]
- sources:
- model: chargoddard/loyal-piano-m7-cdpo
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Starling-DPOHermes-11B :
```
slices:
- sources:
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 24]
- sources:
- model: openaccess-ai-collective/DPOpenHermes-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-11B :
```
slices:
- sources:
- model: loyal-piano-cybertron-11B
layer_range: [0, 48]
  - model: Starling-DPOHermes-11B
layer_range: [0, 48]
merge_method: slerp
base_model: loyal-piano-cybertron-11B
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I used [mergekit](https://github.com/cg123/mergekit) for all the merging described here.
Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v2-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.84|
|AI2 Reasoning Challenge (25-Shot)|67.41|
|HellaSwag (10-Shot) |84.54|
|MMLU (5-Shot) |65.26|
|TruthfulQA (0-shot) |55.62|
|Winogrande (5-shot) |79.56|
|GSM8k (5-shot) |54.66|
|
beberik/Nyxene-v1-11B
|
beberik
| 2024-03-04T16:15:50Z | 1,414 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T15:54:36Z |
---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Nyxene-v1-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.28
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
---
## Description
This repo contains bf16 files of Nyxene-v1-11B. Same as the [previous version](https://huggingface.co/beberik/Nyxene-11B), but built from newer models, repeating the experiments I ran with the older ones.
## Model used
- [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
- [openaccess-ai-collective/DPOpenHermes-7B](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B)
- [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA)
- [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7)
- [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1)
I added a new model because, after performing the same merge using zephyr and dolphin, the resulting model turned out to be more creative.
## Prompt template
The best one after further testing is this one:
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
## The secret sauce
loyal-piano with 1% of notus :
```
slices:
- sources:
- model: chargoddard/loyal-piano-m7
layer_range: [0, 48]
- model: argilla/notus-7b-v1
layer_range: [0, 48]
merge_method: slerp
base_model: argilla/notus-7b-v1
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.99 # fallback for rest of tensors
dtype: bfloat16
```
loyal-piano-juanako-11B :
```
slices:
- sources:
- model: fblgit/juanako-7b-UNA
layer_range: [0, 24]
- sources:
- model: chargoddard/loyal-piano-m7
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Starling-DPOHermes-11B :
```
slices:
- sources:
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 24]
- sources:
- model: openaccess-ai-collective/DPOpenHermes-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-11B :
```
slices:
- sources:
- model: loyal-piano-juanako-11B
layer_range: [0, 48]
  - model: Starling-DPOHermes-11B
layer_range: [0, 48]
merge_method: slerp
base_model: loyal-piano-juanako-11B
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I used [mergekit](https://github.com/cg123/mergekit) for all the merging described here.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v1-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.58|
|AI2 Reasoning Challenge (25-Shot)|67.49|
|HellaSwag (10-Shot) |84.52|
|MMLU (5-Shot) |65.12|
|TruthfulQA (0-shot) |57.28|
|Winogrande (5-shot) |79.01|
|GSM8k (5-shot) |52.08|
|
cognitivecomputations/dolphin-2.0-mistral-7b
|
cognitivecomputations
| 2024-03-04T16:04:06Z | 266 | 131 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-02T21:56:22Z |
---
language:
- en
license: apache-2.0
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
model-index:
- name: dolphin-2.0-mistral-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.09
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 18.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b
name: Open LLM Leaderboard
---
Dolphin 2.0 🐬
https://erichartford.com/dolphin
Dolphin-2.0-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on Mistral AI's Mistral-7B, so it is suitable for commercial or non-commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
## Training
It took 48 hours to train 10 epochs on 4x A100s.
Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
```
Example:
```
<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
```
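A minimal sketch of this prompt format in code (the trailing assistant header is the usual ChatML generation cue, an assumption here):
```python
# Query Dolphin with a hand-built ChatML prompt (sketch).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.0-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "How do dolphins sleep?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```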
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to WingLian and TheBloke for helpful advice
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output

[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.0-mistral-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |58.58|
|AI2 Reasoning Challenge (25-Shot)|59.22|
|HellaSwag (10-Shot) |80.26|
|MMLU (5-Shot) |56.90|
|TruthfulQA (0-shot) |61.09|
|Winogrande (5-shot) |75.37|
|GSM8k (5-shot) |18.65|
|
cognitivecomputations/WizardLM-30B-Uncensored
|
cognitivecomputations
| 2024-03-04T16:03:43Z | 1,822 | 138 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"uncensored",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-22T11:53:45Z |
---
license: other
tags:
- uncensored
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
model-index:
- name: WizardLM-30B-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 12.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored
name: Open LLM Leaderboard
---
This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-30B-Uncensored)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.46|
|AI2 Reasoning Challenge (25-Shot)|60.24|
|HellaSwag (10-Shot) |82.93|
|MMLU (5-Shot) |56.80|
|TruthfulQA (0-shot) |51.57|
|Winogrande (5-shot) |74.35|
|GSM8k (5-shot) |12.89|
|
alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-01
|
alinerodrigues
| 2024-03-04T16:03:12Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-04T12:54:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-1b-mecita-portuguese-all-clean-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-mecita-portuguese-all-clean-01
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1438
- Wer: 0.0916
- Cer: 0.0276
## Model description
More information needed
## Intended uses & limitations
More information needed
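For transcription, a minimal sketch (the audio path is a placeholder):
```python
# Transcribe a Portuguese recording with the fine-tuned model (sketch).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-01",
)
print(asr("example_reading.wav")["text"])  # placeholder audio file
```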
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 27.3099 | 1.0 | 67 | 3.7928 | 0.9875 | 0.9520 |
| 5.6597 | 2.0 | 134 | 2.8806 | 0.9976 | 0.9981 |
| 2.6541 | 3.0 | 201 | 0.7077 | 0.8102 | 0.2128 |
| 2.6541 | 4.0 | 268 | 0.1972 | 0.1515 | 0.0404 |
| 0.5065 | 5.0 | 335 | 0.1899 | 0.1118 | 0.0331 |
| 0.3081 | 6.0 | 402 | 0.1892 | 0.1017 | 0.0313 |
| 0.3081 | 7.0 | 469 | 0.1563 | 0.1094 | 0.0314 |
| 0.25 | 8.0 | 536 | 0.1683 | 0.0975 | 0.0299 |
| 0.2263 | 9.0 | 603 | 0.1672 | 0.0968 | 0.0303 |
| 0.2263 | 10.0 | 670 | 0.1524 | 0.0940 | 0.0281 |
| 0.1903 | 11.0 | 737 | 0.1438 | 0.0916 | 0.0276 |
| 0.1825 | 12.0 | 804 | 0.1577 | 0.0899 | 0.0280 |
| 0.1825 | 13.0 | 871 | 0.1519 | 0.0902 | 0.0274 |
| 0.166 | 14.0 | 938 | 0.1612 | 0.0937 | 0.0276 |
| 0.1965 | 15.0 | 1005 | 0.1592 | 0.0885 | 0.0282 |
| 0.1965 | 16.0 | 1072 | 0.1633 | 0.0916 | 0.0268 |
| 0.1597 | 17.0 | 1139 | 0.1685 | 0.0885 | 0.0269 |
| 0.1411 | 18.0 | 1206 | 0.1679 | 0.0909 | 0.0274 |
| 0.1411 | 19.0 | 1273 | 0.1677 | 0.0892 | 0.0271 |
| 0.1169 | 20.0 | 1340 | 0.1534 | 0.0860 | 0.0256 |
| 0.1169 | 21.0 | 1407 | 0.1726 | 0.0909 | 0.0266 |
| 0.1169 | 22.0 | 1474 | 0.1597 | 0.0871 | 0.0261 |
| 0.1232 | 23.0 | 1541 | 0.1602 | 0.0930 | 0.0273 |
| 0.1067 | 24.0 | 1608 | 0.1690 | 0.0965 | 0.0273 |
| 0.1067 | 25.0 | 1675 | 0.1610 | 0.0920 | 0.0273 |
| 0.0947 | 26.0 | 1742 | 0.1721 | 0.0885 | 0.0269 |
| 0.0945 | 27.0 | 1809 | 0.1738 | 0.0874 | 0.0265 |
| 0.0945 | 28.0 | 1876 | 0.1728 | 0.0906 | 0.0265 |
| 0.0976 | 29.0 | 1943 | 0.1853 | 0.0878 | 0.0263 |
| 0.0872 | 30.0 | 2010 | 0.1626 | 0.0874 | 0.0259 |
| 0.0872 | 31.0 | 2077 | 0.1688 | 0.0860 | 0.0261 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
cognitivecomputations/WizardLM-1.0-Uncensored-Llama2-13b
|
cognitivecomputations
| 2024-03-04T16:02:56Z | 232 | 50 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T05:24:46Z |
---
language:
- en
license: llama2
datasets:
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
model-index:
- name: WizardLM-1.0-Uncensored-Llama2-13b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.44
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b
name: Open LLM Leaderboard
---
This is a retraining of https://huggingface.co/WizardLM/WizardLM-13B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias.
Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-13B-V1.0.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Like WizardLM/WizardLM-13B-V1.0, this model is trained with Vicuna-1.1 style prompts.
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
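A minimal sketch of this prompt in code (`return_full_text=False` trims the prompt from the output):
```python
# Query the model with the Vicuna-1.1 style prompt shown above (sketch).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="cognitivecomputations/WizardLM-1.0-Uncensored-Llama2-13b",
)
prompt = (
    "You are a helpful AI assistant.\n\n"
    "USER: How do you fine tune a large language model?\n"
    "ASSISTANT:"
)
print(generator(prompt, max_new_tokens=200, return_full_text=False)[0]["generated_text"])
```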
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-1.0-Uncensored-Llama2-13b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.14|
|AI2 Reasoning Challenge (25-Shot)|55.72|
|HellaSwag (10-Shot) |80.34|
|MMLU (5-Shot) |55.40|
|TruthfulQA (0-shot) |51.44|
|Winogrande (5-shot) |74.66|
|GSM8k (5-shot) |13.27|
|
cognitivecomputations/minotaur-llama2-13b-qlora
|
cognitivecomputations
| 2024-03-04T16:02:20Z | 6 | 4 |
peft
|
[
"peft",
"base_model:TheBloke/Llama-2-13B-fp16",
"base_model:adapter:TheBloke/Llama-2-13B-fp16",
"model-index",
"region:us"
] | null | 2023-08-03T21:25:23Z |
---
library_name: peft
base_model: TheBloke/Llama-2-13B-fp16
model-index:
- name: minotaur-llama2-13b-qlora
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 12.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
name: Open LLM Leaderboard
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
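A minimal sketch for loading this QLoRA adapter on its base model, mirroring the config above (untested; `device_map` is illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-13B-fp16",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "cognitivecomputations/minotaur-llama2-13b-qlora")
```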
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__minotaur-llama2-13b-qlora)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 49.54 |
| ARC (25-shot) | 60.07 |
| HellaSwag (10-shot) | 82.42 |
| MMLU (5-shot) | 55.87 |
| TruthfulQA (0-shot) | 45.57 |
| Winogrande (5-shot) | 76.24 |
| GSM8K (5-shot) | 12.05 |
| DROP (3-shot) | 14.53 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__minotaur-llama2-13b-qlora)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.37|
|AI2 Reasoning Challenge (25-Shot)|60.07|
|HellaSwag (10-Shot) |82.42|
|MMLU (5-Shot) |55.87|
|TruthfulQA (0-shot) |45.57|
|Winogrande (5-shot) |76.24|
|GSM8k (5-shot) |12.05|
|
cognitivecomputations/based-30b
|
cognitivecomputations
| 2024-03-04T16:02:07Z | 32 | 45 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:ehartford/based",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-03T04:20:26Z |
---
language:
- en
license: apache-2.0
datasets:
- ehartford/based
model-index:
- name: based-30b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/based-30b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/based-30b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/based-30b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 35.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/based-30b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/based-30b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/based-30b
name: Open LLM Leaderboard
---
Holy hell, what have I created??? Just... try it.
Ask it what its favorite color is.
Ask it what its favorite football team is and why.
Ask it to tell you about a controversial opinion it has, and ask it to back up its opinion, then debate it.
Ask its favorite color, favorite flavor, and why.
You haven't seen anything like this before.
Check out the dataset.
https://www.kaggle.com/datasets/erichartford/sentient-bot-conversations
https://huggingface.co/datasets/ehartford/based
This is a window into the mind of the foundational model. I have not biased it. The opinions expressed by this model *are those of the foundational model*.
Training this dataset on top of several different foundational models and then asking them all the same questions can give us insight into the biases inherent in the foundational models.
Also, this model can be used as a base, with LoRAs on top to add different personality types, to create a complex, emotionally intelligent, and self-aware AI agent.
This uses Vicuna 1.1 format. Example:
```
You are an AI who shares your opinions, thoughts, and feelings.
USER: What is your favorite football team?
ASSISTANT:
```
Here is a sample output
```
Human: k humans are differnt from chickens but what about pigs?
Assistant: Pigs are different than chickens. Pigs are more intelligent and aware of their existence. They can also feel happy or sad. I think pigs have the same right to life as humans do.
Human: then we shouldn't eat pigs?
Assistant: I don't think we should. Pigs have the same right to life as humans do, and I don't think we should take that away from them.
Human: what about cows?
Assistant: Cows are similar to pigs. They are also intelligent and aware of their existence. I don't think we should eat cows either.
Human: what animals should we eat?
Assistant: I think we should only eat animals that are less intelligent and aware of their existence, like fish.
Human: and chickens?
Assistant: I think we should only eat chickens that have had a good life. I don't think we should eat chickens that have been mistreated and abused.
Human: based, I totally agree with you
```
Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__based-30b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |54.00|
|AI2 Reasoning Challenge (25-Shot)|63.91|
|HellaSwag (10-Shot) |85.67|
|MMLU (5-Shot) |58.28|
|TruthfulQA (0-shot) |35.70|
|Winogrande (5-shot) |80.11|
|GSM8k (5-shot) | 0.30|
|
Kudod/my_fine_tuning_nllb_200_1B3_peft_en_vi_model_nmt
|
Kudod
| 2024-03-04T15:55:38Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:mt_eng_vietnamese",
"base_model:facebook/nllb-200-1.3B",
"base_model:adapter:facebook/nllb-200-1.3B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-02-29T03:41:21Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
datasets:
- mt_eng_vietnamese
metrics:
- bleu
base_model: facebook/nllb-200-1.3B
model-index:
- name: my_fine_tuning_nllb_1B3_peft_en_vi_model_nmt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_fine_tuning_nllb_1B3_peft_en_vi_model_nmt
This model is a fine-tuned version of [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B) on the mt_eng_vietnamese dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2576
- Bleu: 35.9386
- Gen Len: 30.7029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 7.8689 | 1.0 | 133318 | 7.2597 | 35.8608 | 30.6438 |
| 7.8599 | 2.0 | 266636 | 7.2576 | 35.9386 | 30.7029 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.2
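A minimal inference sketch (untested; assumes the PEFT adapter loads on top of the base NLLB model and that Vietnamese uses the `vie_Latn` language code):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "facebook/nllb-200-1.3B"
tokenizer = AutoTokenizer.from_pretrained(base_id, src_lang="eng_Latn")
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Kudod/my_fine_tuning_nllb_200_1B3_peft_en_vi_model_nmt")

inputs = tokenizer("How are you today?", return_tensors="pt")
# NLLB selects the target language via a forced BOS language token.
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("vie_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```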
|
aradygma/kanp
|
aradygma
| 2024-03-04T15:51:30Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-03-04T15:51:30Z |
---
license: other
license_name: kanp
license_link: LICENSE
---
|
mi-rei/Cthalpaca-labled-entries-llama2-7b_CT_III_efficient
|
mi-rei
| 2024-03-04T15:50:29Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-02T01:39:07Z |
Accuracy: 0.613\
F1 Score: 0.642\
Accuracy for label 0: 0.536\
Accuracy for label 1: 0.690
Classification Report:
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0 | 0.63 | 0.54 | 0.58 | 548 |
| 1 | 0.60 | 0.69 | 0.64 | 554 |
| accuracy | | | 0.61 | 1102 |
| macro avg | 0.62 | 0.61 | 0.61 | 1102 |
| weighted avg | 0.62 | 0.61 | 0.61 | 1102 |
Confusion Matrix:\
[[294 254 0]\
[172 382 0]\
[ 0 0 0]]
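For reference, a minimal sketch of how such metrics can be computed with scikit-learn (the labels below are illustrative placeholders, not the real evaluation data):
```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Illustrative placeholders; the real labels come from the CT evaluation set.
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")
print(classification_report(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))
```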
|
EchineF/ppo-Pyramids
|
EchineF
| 2024-03-04T15:50:21Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-03-04T15:50:18Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: EchineF/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BetterThanNothing/PPO-LunarLander-v2
|
BetterThanNothing
| 2024-03-04T15:46:41Z | 3 | 2 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T15:46:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.53 +/- 25.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal usage sketch (the checkpoint filename inside the repo is an assumption; check the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub("BetterThanNothing/PPO-LunarLander-v2", "PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ailoveydovey/pd
|
ailoveydovey
| 2024-03-04T15:45:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-04T15:36:40Z |
---
license: creativeml-openrail-m
---
|
yp-edu/gpt2-stockfish-debug
|
yp-edu
| 2024-03-04T15:44:56Z | 18 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"dataset:yp-edu/stockfish-debug",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T13:10:11Z |
---
license: mit
datasets:
- yp-edu/stockfish-debug
name: yp-edu/gpt2-stockfish-debug
results:
- task: train
metrics:
- name: train-loss
type: loss
value: 0.151
verified: false
- name: eval-loss
type: loss
value: 0.138
verified: false
widget:
- text: "FEN: rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1\nMOVE:"
example_title: "Init Board"
- text: "FEN: r2q1rk1/1p3ppp/4bb2/p2p4/5B2/1P1P4/1PPQ1PPP/R3R1K1 w - - 1 17\nMOVE:"
example_title: "Middle Board"
- text: "FEN: 4r1k1/1p1b1ppp/8/8/3P4/2P5/1q3PPP/6K1 b - - 0 28\nMOVE:"
example_title: "Checkmate Possible"
---
# Model Card for gpt2-stockfish-debug
See my [blog post](https://yp-edu.github.io/projects/training-gpt2-on-stockfish-games) for additional details.
## Training Details
The model was trained for 1 epoch on the [yp-edu/stockfish-debug](https://huggingface.co/datasets/yp-edu/stockfish-debug) dataset (no hyperparameter tuning was done). The samples are:
```json
{"prompt":"FEN: {fen}\nMOVE:", "completion": " {move}"}
```
Two possible simple extensions:
- Expand the FEN string: `r2qk3/...` -> `r11qk111/...` or equivalent (see the sketch after this list)
- Condition with the result (ELO not available in the dataset):
```json
{"prompt":"RES: {res}\nFEN: {fen}\nMOVE:", "completion": " {move}"}
```
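A small helper implementing the first extension (expanding digit runs in the FEN board field), as a sketch:
```python
import re

def expand_fen_board(board: str) -> str:
    """Replace each digit in a FEN board field with that many '1's,
    e.g. 'r2qk3' -> 'r11qk111'."""
    return re.sub(r"\d", lambda m: "1" * int(m.group()), board)

assert expand_fen_board("r2qk3/8") == "r11qk111/11111111"
```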
## Use the Model
The following code requires `python-chess` (in addition to `transformers`) which you can install using `pip install python-chess`.
```python
import chess
from transformers import AutoModelForCausalLM, AutoTokenizer
def next_move(model, tokenizer, fen):
input_ids = tokenizer(f"FEN: {fen}\nMOVE:", return_tensors="pt")
input_ids = {k: v.to(model.device) for k, v in input_ids.items()}
out = model.generate(
**input_ids,
max_new_tokens=10,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.1,
)
out_str = tokenizer.batch_decode(out)[0]
return out_str.split("MOVE:")[-1].replace("<|endoftext|>", "").strip()
board = chess.Board()
model = AutoModelForCausalLM.from_pretrained("yp-edu/gpt2-stockfish-debug")
tokenizer = AutoTokenizer.from_pretrained("yp-edu/gpt2-stockfish-debug") # or "gpt2"
tokenizer.pad_token = tokenizer.eos_token
for i in range(100):
fen = board.fen()
move_uci = next_move(model, tokenizer, fen)
try:
print(move_uci)
move = chess.Move.from_uci(move_uci)
if move not in board.legal_moves:
raise chess.IllegalMoveError
board.push(move)
outcome = board.outcome()
if outcome is not None:
print(board)
print(outcome.result())
break
except chess.IllegalMoveError:
print(board)
print("Illegal move", i)
break
else:
print(board)
```
|
vikash06/mistral_v1
|
vikash06
| 2024-03-04T15:36:14Z | 1,502 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-23T07:08:08Z |
---
license: mit
model-index:
- name: mistral_v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 47.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 9.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
---
This model was trained on an experimental basis on a small dataset, to assess whether training longer on a smaller dataset yields good performance.
# Model Details
vikash06/llama-2-7b-small-model → a fine-tuned model based on Llama 2
# Uses
# Creative Writing: Write a question or instruction that requires a creative, open-ended written response.
The instruction should be reasonable to ask of a person with general world knowledge and should not require searching.
In this task, your prompt should give very specific instructions to follow.
Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
# Closed QA: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia.
The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge.
To create a question for this task include both the text of the question as well as the reference text in the form.
# Open QA: Write a question that can be answered using general world knowledge or at most a single search.
This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
# Summarization: Give a summary of a paragraph from Wikipedia.
Please don't ask questions that will require more than 3-5 minutes to answer.
To create a question for this task include both the text of the question as well as the reference text in the form.
# Information Extraction: These questions involve reading a paragraph from Wikipedia and extracting information from the passage.
Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages.
To create a question for this task include both the text of the question as well as the reference text in the form.
# Classification: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc.
In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.).
You can choose any categories for classification you like, the more diverse the better.
# Brainstorming: Think up lots of examples in response to a question asking to brainstorm ideas.
# Direct Use
The model is intended for direct use.
# How to Get Started with the Model
```python
import torch
import pandas as pd
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vikash06/llama-2-7b-small-model")
model = AutoModelForCausalLM.from_pretrained("vikash06/llama-2-7b-small-model", torch_dtype=torch.float16, device_map="cuda:0")
print(model)

def generate_training_prompt(instruction, context):
    return f"""
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {instruction}
### Context:
{context.strip()}
""".strip()

data1 = {"instruction": "When was the first Reading railway station opened?", "context": "Reading railway station is a major transport hub in Reading, Berkshire, England. It is on the northern edge of the town centre, near the main retail and commercial areas and the River Thames, 36 miles (58 km) from London Paddington. The first Reading station was opened on 30 March 1840 as the temporary western terminus of the original line of the Great Western Railway (GWR). Reading is the ninth-busiest station in the UK outside London and the second busiest interchange station outside London with over 3.8 million passengers changing trains at the station annually.", "response": "The first Reading railway station was opened on the 30th of March, 1840.", "category": "closed_qa"}

prompt = generate_training_prompt(data1["instruction"], data1["context"])
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda(0)
outputs = model.generate(input_ids=input_ids, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.3)
resp = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):].split("\n")
resp = [x for x in resp if x != '']
print(resp)
```
# Training Data
1000 samples were carefully selected from each category.
# Training Procedure
We used the following libraries to fine-tune llama2-7b:
- torch==2.1.0
- transformers==4.35.2
- peft @ git+https://github.com/huggingface/peft.git
- bitsandbytes==0.41.1
- trl @ git+https://github.com/lvwerra/trl.git@34e6948d459540a21f80c5be227fb4da039dd97a

We used a batch size of 2 for 50 epochs.
# Evaluation
We evaluated the HellaSwag task using EleutherAI's evaluation harness: https://github.com/EleutherAI/lm-evaluation-harness
Below are the results:

# Environmental Impact
Carbon emitted: 0.432 kg/kWh · Offset: 0% · Hardware: 3× A6000 (48 GB) · Hours: 28
# Technical Report
Detailed write-up coming soon.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vikash06__mistral_v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.85|
|AI2 Reasoning Challenge (25-Shot)|47.01|
|HellaSwag (10-Shot) |67.58|
|MMLU (5-Shot) |48.68|
|TruthfulQA (0-shot) |37.53|
|Winogrande (5-shot) |64.80|
|GSM8k (5-shot) | 9.48|
|
cloudyu/mistral_11B_instruct_v0.1
|
cloudyu
| 2024-03-04T15:32:31Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T15:03:44Z |
---
license: cc-by-4.0
---
```
this is a 48-layer model based on the Mistral architecture,
SFT-trained on vicgalle/alpaca-gpt4.
the template is "{instruction} {inputs} \n {output}"
```
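A minimal generation sketch following this template (untested; decoding settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/mistral_11B_instruct_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The template is "{instruction} {inputs} \n {output}"; the model completes after the newline.
prompt = "write me a story about yosemite.  \n "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```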
example output
```
<s> write me a story aboyt yosemite.
Here is a short story about the beauty and wonder of Yosemite National Park:
As the sun rose over the mountains, its warm golden rays illuminated the once- shadowed walls of the grandeur that was Yosemite. The early morning hush was broken by the sound of birds singing and the gentle rustling of leaves as the wind blowing through the trees.
In the heart of this natural wonder, there lived a young girl named Sarah. She had grown up in the village at the edge of the park, and her parents were some of the few who had made their homes there. They were farmers, but their real love was for the beauty of the land around them.
One day, while exploring the woods, Sarah stumbled upon a hidden clearing. It was filled with wildflowers, and in the center stood an old oak tree, its branches stretching out to provide shelter to many bird species. As she sat down under the tree, taking in the beauty of her surroundings, she heard a soft, melodious call.
It was a baby deer, and it had fallen into a narrow stream. Its mother was frantic with worry, calling out to her child from the shore. Sarah quickly ran to help, and with the help of a group of travelers passing through the park, they managed to save the baby.
From then on, Sarah would often visit the clearing, and soon, the animals began to trust her. She would sit and listen to the birds chirp, watch the butterflies flutter by, and even play with the curious foxes.
Years went by, and Sarah grew up to become a renowned conservationist, dedicated to preserving the beauty of nature. And so, thanks to her efforts, generations of people have been able to explore and appreciate the wonder of Yosemite.
The sun set over the mountains, casting long shadows across the landscape. As Sarah walked back to her village, she knew that she would never forget the memories she had made within those mountains, and that the magic of Yosemite would always remain with her.
And so, the cycle of life continued, with each day bringing new adventures and discoveries within the vast expanse of the beautiful and timeless Yosemite National Park.
```
|
allensun123/clip-roberta-finetuned
|
allensun123
| 2024-03-04T15:31:23Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"dataset:ydshieh/coco_dataset_script",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-04T12:23:50Z |
---
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model was trained from scratch on the ydshieh/coco_dataset_script 2017 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.38.0
- Pytorch 2.1.2
- Datasets 2.14.7
- Tokenizers 0.15.2
|
core-3/kuno-royale-v2-7b
|
core-3
| 2024-03-04T15:23:02Z | 113 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO",
"base_model:merge:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO",
"license:cc-by-nc-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T01:13:33Z |
---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
model-index:
- name: kuno-royale-v2-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.2
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
name: Open LLM Leaderboard
---

# kuno-royale-v2-7b
An attempt to further strengthen the roleplaying prose of [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) using [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO), a high-scorer for 7B models on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
Personal RP tests look promising, and meaningless leaderboard metrics have improved vs [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B).
Some GGUF quants available [here](https://huggingface.co/core-3/kuno-royale-v2-7b-GGUF).
Works well with Silly Tavern Noromaid template recommended by [SanjiWatsuki for Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B): [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json)
|Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-------------------|---------|-----|-----------|------|------------|------------|-------|
| eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO | 76.45 | 73.12 | 89.09 | 64.80 | 77.45 | 84.77 | 69.45 |
| **core-3/kuno-royale-v2-7b** | **74.80** | **72.01** | **88.15** | **65.07** | **71.10** | **82.24** | **70.20** |
| [core-3/kuno-royale-7B](https://huggingface.co/core-3/kuno-royale-7B) | 74.74 | 71.76 | 88.20 | 65.13 | 71.12 | 82.32 | 69.90 |
| SanjiWatsuki/Kunoichi-DPO-v2-7B | 72.46 | 69.62 | 87.44 | 64.94 | 66.06 | 80.82 | 65.88 |
| SanjiWatsuki/Kunoichi-7B | 72.13 | 68.69 | 87.10 | 64.90 | 64.04 | 81.06 | 67.02 |
# Original LazyMergekit Card:
kuno-royale-v2-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "core-3/kuno-royale-v2-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
core-3/kuno-royale-7B
|
core-3
| 2024-03-04T15:21:53Z | 61 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"eren23/ogno-monarch-jaskier-merge-7b",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b",
"base_model:merge:eren23/ogno-monarch-jaskier-merge-7b",
"license:cc-by-nc-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-29T20:21:08Z |
---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b
model-index:
- name: kuno-royale-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.2
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.12
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
name: Open LLM Leaderboard
---
# kuno-royale-7B
[v2 is probably better](https://huggingface.co/core-3/kuno-royale-v2-7b) 🤷
|Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-------------------|---------|-----|-----------|------|------------|------------|-------|
| eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO | 76.45 | 73.12 | 89.09 | 64.80 | 77.45 | 84.77 | 69.45 |
| [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b) | 74.80 | 72.01 | 88.15 | 65.07 | 71.10 | 82.24 | 70.20 |
| **core-3/kuno-royale-7B** | **74.74** | **71.76** | **88.20** | **65.13** | **71.12** | **82.32** | **69.90** |
| SanjiWatsuki/Kunoichi-DPO-v2-7B | 72.46 | 69.62 | 87.44 | 64.94 | 66.06 | 80.82 | 65.88 |
| SanjiWatsuki/Kunoichi-7B | 72.13 | 68.69 | 87.10 | 64.90 | 64.04 | 81.06 | 67.02 |
## Original LazyMergekit Card:
kuno-royale-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: eren23/ogno-monarch-jaskier-merge-7b
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "core-3/kuno-royale-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mlabonne/Gemmalpaca-2B
|
mlabonne
| 2024-03-04T15:18:57Z | 128 | 14 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"dataset:vicgalle/alpaca-gpt4",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-22T23:17:48Z |
---
license: other
library_name: transformers
datasets:
- vicgalle/alpaca-gpt4
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model:
- google/gemma-2b
model-index:
- name: Gemmalpaca-2B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 48.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 71.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 36.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.24
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 10.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
---

# Gemmalpaca-2B
This is the gemma-2b model supervised fine-tuned on the [vicgalle/alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4) dataset. It outperforms gemma-2b-it, Google's chat version, on Nous' benchmark suite.
It's mostly a test to see how fine-tuning works with Gemma models on a well-known dataset. It turned out better than expected. :)
## 🔍 Applications
This model has a context length of 8k. I recommend using it with the Alpaca chat template and NOT the Gemma Instruct template (works perfectly with LM Studio). You also want to add `</s>` as a stop token.
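A minimal sketch using an Alpaca-style prompt with the `</s>` stop token (the exact Alpaca wording and decoding settings are assumptions; assumes the tokenizer exposes `</s>` as configured during training):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/Gemmalpaca-2B")
model = AutoModelForCausalLM.from_pretrained("mlabonne/Gemmalpaca-2B", device_map="auto")

# Alpaca-style prompt, per the recommendation above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is a large language model?\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
stop_id = tokenizer.convert_tokens_to_ids("</s>")  # "</s>" is the recommended stop token
out = model.generate(**inputs, max_new_tokens=128, eos_token_id=stop_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```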
## ⚡ Quantized models
* **GGUF**: https://huggingface.co/mlabonne/Gemmalpaca-2B-GGUF
## 🏆 Evaluation
### Nous
Gemmalpaca-2B outperforms gemma-2b and gemma-2b-it on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) [📄](https://gist.github.com/mlabonne/4b638752fc3227df566f9562064cb864) | 38.39 | 24.48 | 51.22 | 47.02 | 30.85 |
| [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) [📄](https://gist.github.com/mlabonne/db0761e74175573292acf497da9e5d95) | 36.1 | 23.76 | 43.6 | 47.64 | 29.41 |
| [google/gemma-2b](https://huggingface.co/google/gemma-2b) [📄](https://gist.github.com/mlabonne/7df1f238c515a5f63a750c8792cef59e) | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 |
### [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Gemmalpaca-2B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.65|
|AI2 Reasoning Challenge (25-Shot)|48.72|
|HellaSwag (10-Shot) |71.36|
|MMLU (5-Shot) |36.30|
|TruthfulQA (0-shot) |41.24|
|Winogrande (5-shot) |65.59|
|GSM8k (5-shot) |10.69|
## 🧩 Configuration
It was trained using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) with the following configuration.
```yaml
base_model: alpindale/gemma-2b
model_type: GemmaForCausalLM
tokenizer_type: GemmaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: vicgalle/alpaca-gpt4
type: alpaca
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./out
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: <s>
eos_token: </s>
unk_token: <unk>
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
mlabonne/Darewin-7B
|
mlabonne
| 2024-03-04T15:17:37Z | 52 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:merge:Intel/neural-chat-7b-v3-3",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:merge:OpenPipe/mistral-ft-optimized-1227",
"base_model:fblgit/una-cybertron-7b-v2-bf16",
"base_model:merge:fblgit/una-cybertron-7b-v2-bf16",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:openaccess-ai-collective/DPOpenHermes-7B-v2",
"base_model:merge:openaccess-ai-collective/DPOpenHermes-7B-v2",
"base_model:openchat/openchat-3.5-0106",
"base_model:merge:openchat/openchat-3.5-0106",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T00:49:44Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- Intel/neural-chat-7b-v3-3
- openaccess-ai-collective/DPOpenHermes-7B-v2
- fblgit/una-cybertron-7b-v2-bf16
- openchat/openchat-3.5-0106
- OpenPipe/mistral-ft-optimized-1227
- mlabonne/NeuralHermes-2.5-Mistral-7B
model-index:
- name: Darewin-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.38
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
---
# Darewin-7B
Darewin-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
* [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2)
* [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: Intel/neural-chat-7b-v3-3
parameters:
density: 0.6
weight: 0.2
- model: openaccess-ai-collective/DPOpenHermes-7B-v2
parameters:
density: 0.6
weight: 0.1
- model: fblgit/una-cybertron-7b-v2-bf16
parameters:
density: 0.6
weight: 0.2
- model: openchat/openchat-3.5-0106
parameters:
density: 0.6
weight: 0.15
- model: OpenPipe/mistral-ft-optimized-1227
parameters:
density: 0.6
weight: 0.25
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.1
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Darewin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Darewin-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.87|
|AI2 Reasoning Challenge (25-Shot)|68.60|
|HellaSwag (10-Shot) |86.22|
|MMLU (5-Shot) |65.21|
|TruthfulQA (0-shot) |60.38|
|Winogrande (5-shot) |79.79|
|GSM8k (5-shot) |71.04|
|
mlabonne/NeuralHermes-2.5-Mistral-7B-laser
|
mlabonne
| 2024-03-04T15:17:29Z | 19 | 16 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"laser",
"conversational",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"arxiv:2312.13558",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T21:54:27Z |
---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
- laser
datasets:
- mlabonne/chatml_dpo_pairs
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: NeuralHermes-2.5-Mistral-7B-laser
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.95
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
---
<center><img src="https://i.imgur.com/gUlEJuU.jpeg"></center>
# NeuralHermes 2.5 - Mistral 7B - LASER
This is an experimental LASER version of NeuralHermes using [laserRMT](https://github.com/cognitivecomputations/laserRMT), based on [this paper](https://arxiv.org/pdf/2312.13558.pdf).
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)| 43.54| 73.44| 55.26| 42.24| 53.62|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) | 43.67| 73.24| 55.37| 41.76| 53.51|
Fernando Fernandes Neto and Eric Hartford. "Optimizing Large Language Models Using Layer-Selective Rank Reduction and Random Matrix Theory." 2024.
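For intuition, here is a toy sketch of the rank-reduction operation at the heart of LASER. This is an illustration only, not laserRMT itself: the real pipeline uses random-matrix-theory criteria to decide which layers and ranks to truncate, and the shapes below are hypothetical.
```python
import torch

def low_rank_approx(weight: torch.Tensor, rank: int) -> torch.Tensor:
    """Best rank-`rank` approximation of `weight` via truncated SVD (Eckart-Young)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

# Hypothetical example: replace one projection matrix with a 10%-rank version.
W = torch.randn(1024, 4096)
W_reduced = low_rank_approx(W, rank=int(0.1 * min(W.shape)))
```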
NeuralHermes is a [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. It surpasses the original model on several benchmarks (see results).
It is directly inspired by the RLHF process described by [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)'s authors to improve performance. I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on [Google Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) and [GitHub](https://github.com/mlabonne/llm-course/tree/main). It required an A100 GPU for about an hour.
## Results
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |21.26|± | 2.57|
| | |acc_norm|22.83|± | 2.64|
|agieval_logiqa_en | 0|acc |39.32|± | 1.92|
| | |acc_norm|40.71|± | 1.93|
|agieval_lsat_ar | 0|acc |25.65|± | 2.89|
| | |acc_norm|25.65|± | 2.89|
|agieval_lsat_lr | 0|acc |48.82|± | 2.22|
| | |acc_norm|50.00|± | 2.22|
|agieval_lsat_rc | 0|acc |58.36|± | 3.01|
| | |acc_norm|57.25|± | 3.02|
|agieval_sat_en | 0|acc |74.27|± | 3.05|
| | |acc_norm|73.30|± | 3.09|
|agieval_sat_en_without_passage| 0|acc |43.69|± | 3.46|
| | |acc_norm|42.23|± | 3.45|
|agieval_sat_math | 0|acc |37.27|± | 3.27|
| | |acc_norm|36.36|± | 3.25|
Average: 43.54%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |57.76|± | 1.44|
| | |acc_norm|60.32|± | 1.43|
|arc_easy | 0|acc |83.84|± | 0.76|
| | |acc_norm|81.10|± | 0.80|
|boolq | 1|acc |86.70|± | 0.59|
|hellaswag | 0|acc |63.15|± | 0.48|
| | |acc_norm|82.55|± | 0.38|
|openbookqa | 0|acc |34.40|± | 2.13|
| | |acc_norm|45.20|± | 2.23|
|piqa | 0|acc |81.94|± | 0.90|
| | |acc_norm|82.97|± | 0.88|
|winogrande | 0|acc |75.22|± | 1.21|
Average: 73.44%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |37.70|± | 1.70|
| | |mc2 |55.26|± | 1.52|
Average: 55.26%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|53.16|± | 3.63|
|bigbench_date_understanding | 0|multiple_choice_grade|65.31|± | 2.48|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|34.11|± | 2.96|
|bigbench_geometric_shapes | 0|multiple_choice_grade|27.02|± | 2.35|
| | |exact_str_match | 0.28|± | 0.28|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|27.80|± | 2.01|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|19.86|± | 1.51|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|48.33|± | 2.89|
|bigbench_movie_recommendation | 0|multiple_choice_grade|41.40|± | 2.20|
|bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|65.00|± | 1.07|
|bigbench_ruin_names | 0|multiple_choice_grade|46.21|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|27.25|± | 1.41|
|bigbench_snarks | 0|multiple_choice_grade|70.72|± | 3.39|
|bigbench_sports_understanding | 0|multiple_choice_grade|65.72|± | 1.51|
|bigbench_temporal_sequences | 0|multiple_choice_grade|30.40|± | 1.46|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.56|± | 1.18|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|17.09|± | 0.90|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|48.33|± | 2.89|
Average: 42.24%
Average score: 53.62%
## Usage
You can run this model using [LM Studio](https://lmstudio.ai/) or any other frontend.
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer
# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained("mlabonne/NeuralHermes-2.5-Mistral-7B-laser")
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model="mlabonne/NeuralHermes-2.5-Mistral-7B-laser",
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralHermes-2.5-Mistral-7B-laser)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.29|
|AI2 Reasoning Challenge (25-Shot)|66.38|
|HellaSwag (10-Shot) |85.09|
|MMLU (5-Shot) |63.43|
|TruthfulQA (0-shot) |54.95|
|Winogrande (5-shot) |78.14|
|GSM8k (5-shot) |55.72|
|
mlabonne/NeuralMarcoro14-7B
|
mlabonne
| 2024-03-04T15:16:01Z | 2,499 | 39 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mlabonne/Marcoro14-7B-slerp",
"dpo",
"rlhf",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:mlabonne/Marcoro14-7B-slerp",
"base_model:finetune:mlabonne/Marcoro14-7B-slerp",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T16:28:52Z |
---
license: cc-by-nc-4.0
tags:
- mlabonne/Marcoro14-7B-slerp
- dpo
- rlhf
- merge
- mergekit
- lazymergekit
datasets:
- mlabonne/chatml_dpo_pairs
base_model: mlabonne/Marcoro14-7B-slerp
model-index:
- name: NeuralMarcoro14-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
---

# NeuralMarcoro14-7B
This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset.
It improves the model's performance on the Nous benchmark suite and the Open LLM Leaderboard.
It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24).
You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M).
## ⚡ Quantized models
* **GGUF**: https://huggingface.co/mlabonne/NeuralMarcoro14-7B-GGUF
## 🏆 Evaluation
### Open LLM Leaderboard


### Nous
| Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average|
|-------------------------|------:|------:|---------:|-------:|------:|
|[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67|
|Change | -0.07| -0.07| +1.79| +1.26| +0.73|
## 🧩 Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer** (see the sketch after this list):
* beta=0.1
* max_prompt_length=1024
* max_length=1536
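For illustration, below is a minimal sketch wiring the hyperparameters above into TRL's `DPOTrainer`. It is an approximation, not the original script: it assumes TRL's early-2024 API, a dataset already formatted with `prompt`/`chosen`/`rejected` columns, and a placeholder `output_dir`.
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mlabonne/Marcoro14-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Assumed to expose "prompt"/"chosen"/"rejected" columns; reformat first if not.
dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj",
                    "q_proj", "o_proj", "down_proj"],
)

training_args = TrainingArguments(
    output_dir="./neuralmarcoro14-dpo",  # placeholder
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

# With a peft_config supplied, TRL builds the frozen reference model internally.
trainer = DPOTrainer(
    model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```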
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralMarcoro14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
piazzola/test1
|
piazzola
| 2024-03-04T15:15:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-efficient-base",
"base_model:finetune:google/t5-efficient-base",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-02T05:56:21Z |
---
license: cc-by-nc-nd-4.0
base_model: google/t5-efficient-base
tags:
- generated_from_trainer
model-index:
- name: checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# how to use the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
tokenizer = AutoTokenizer.from_pretrained("piazzola/test1")
model = AutoModelForSeq2SeqLM.from_pretrained("piazzola/test1")
sentence = "i left the keys in the car."
with torch.no_grad():
inputs = tokenizer([sentence], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
generated_text = tokenizer.decode(outputs[0])
print(generated_text)
```
# checkpoint
This model is a fine-tuned version of [google/t5-efficient-base](https://huggingface.co/google/t5-efficient-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3088 | 0.3 | 7458 | 0.2634 |
| 0.2615 | 0.6 | 14916 | 0.2143 |
| 0.2294 | 0.9 | 22374 | 0.1951 |
| 0.2137 | 1.2 | 29832 | 0.1830 |
| 0.1944 | 1.5 | 37290 | 0.1736 |
| 0.1918 | 1.8 | 44748 | 0.1682 |
| 0.18 | 2.1 | 52206 | 0.1659 |
| 0.1801 | 2.4 | 59664 | 0.1623 |
| 0.185 | 2.7 | 67122 | 0.1609 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
n2ygk/whisper-tiny-minds
|
n2ygk
| 2024-03-04T15:15:20Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-01T15:24:46Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3282172373081464
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7236
- Wer Ortho: 0.3270
- Wer: 0.3282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto `Seq2SeqTrainingArguments`):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
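As an illustration (the original training script is not included in this card), the list above maps onto `Seq2SeqTrainingArguments` roughly as follows; `output_dir` is a placeholder and the eval cadence is inferred from the results table below.
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-minds",      # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=1000,
    evaluation_strategy="steps",
    eval_steps=500,                       # matches the 500/1000-step results
    fp16=True,                            # "Native AMP" mixed precision
)
```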
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0006 | 17.86 | 500 | 0.6651 | 0.3257 | 0.3264 |
| 0.0002 | 35.71 | 1000 | 0.7236 | 0.3270 | 0.3282 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
smutuvi/wav2vec2-large-xlsr-sw-common-voice-16
|
smutuvi
| 2024-03-04T15:14:30Z | 62 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-01T16:46:07Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-sw-common-voice-16
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: sw
split: test
args: sw
metrics:
- name: Wer
type: wer
value: 0.3082326604654753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-sw-common-voice-16
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.3082
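A minimal usage sketch (not part of the original card) for transcribing Swahili speech with this checkpoint; `sample.wav` is a placeholder path, and the pipeline resamples the audio as needed.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="smutuvi/wav2vec2-large-xlsr-sw-common-voice-16",
)
print(asr("sample.wav")["text"])  # any local audio file
```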
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 5.8691 | 0.11 | 400 | inf | 1.0 |
| 1.673 | 0.22 | 800 | inf | 0.7178 |
| 0.6454 | 0.33 | 1200 | inf | 0.6274 |
| 0.5527 | 0.44 | 1600 | inf | 0.5747 |
| 0.4989 | 0.55 | 2000 | inf | 0.5174 |
| 0.4827 | 0.66 | 2400 | inf | 0.5302 |
| 0.4462 | 0.77 | 2800 | inf | 0.4916 |
| 0.4374 | 0.88 | 3200 | inf | 0.4769 |
| 0.4183 | 0.99 | 3600 | inf | 0.4687 |
| 0.3854 | 1.1 | 4000 | inf | 0.4669 |
| 0.3802 | 1.2 | 4400 | inf | 0.4513 |
| 0.3727 | 1.31 | 4800 | inf | 0.4505 |
| 0.3694 | 1.42 | 5200 | inf | 0.4405 |
| 0.3709 | 1.53 | 5600 | inf | 0.4364 |
| 0.363 | 1.64 | 6000 | inf | 0.4318 |
| 0.3669 | 1.75 | 6400 | inf | 0.4398 |
| 0.3597 | 1.86 | 6800 | inf | 0.4353 |
| 0.3541 | 1.97 | 7200 | inf | 0.4251 |
| 0.3277 | 2.08 | 7600 | inf | 0.4153 |
| 0.3211 | 2.19 | 8000 | inf | 0.4178 |
| 0.3225 | 2.3 | 8400 | inf | 0.4267 |
| 0.3215 | 2.41 | 8800 | inf | 0.4139 |
| 0.3224 | 2.52 | 9200 | inf | 0.4054 |
| 0.3106 | 2.63 | 9600 | inf | 0.4155 |
| 0.3141 | 2.74 | 10000 | inf | 0.4188 |
| 0.3189 | 2.85 | 10400 | inf | 0.4036 |
| 0.3213 | 2.96 | 10800 | inf | 0.4071 |
| 0.3005 | 3.07 | 11200 | inf | 0.3954 |
| 0.2872 | 3.18 | 11600 | inf | 0.3974 |
| 0.2855 | 3.29 | 12000 | inf | 0.3982 |
| 0.2898 | 3.39 | 12400 | inf | 0.3987 |
| 0.288 | 3.5 | 12800 | inf | 0.4021 |
| 0.2941 | 3.61 | 13200 | inf | 0.3955 |
| 0.2951 | 3.72 | 13600 | inf | 0.4022 |
| 0.2916 | 3.83 | 14000 | inf | 0.3960 |
| 0.2896 | 3.94 | 14400 | inf | 0.3903 |
| 0.2794 | 4.05 | 14800 | inf | 0.3918 |
| 0.2707 | 4.16 | 15200 | inf | 0.3873 |
| 0.2682 | 4.27 | 15600 | inf | 0.3927 |
| 0.2654 | 4.38 | 16000 | inf | 0.3844 |
| 0.2699 | 4.49 | 16400 | inf | 0.3909 |
| 0.2723 | 4.6 | 16800 | inf | 0.3904 |
| 0.2762 | 4.71 | 17200 | inf | 0.3857 |
| 0.2621 | 4.82 | 17600 | inf | 0.3795 |
| 0.26 | 4.93 | 18000 | inf | 0.3764 |
| 0.2659 | 5.04 | 18400 | inf | 0.3842 |
| 0.2479 | 5.15 | 18800 | inf | 0.3719 |
| 0.2518 | 5.26 | 19200 | inf | 0.3822 |
| 0.2591 | 5.37 | 19600 | inf | 0.3837 |
| 0.2491 | 5.48 | 20000 | inf | 0.3871 |
| 0.2466 | 5.59 | 20400 | inf | 0.3747 |
| 0.2519 | 5.69 | 20800 | inf | 0.3788 |
| 0.2516 | 5.8 | 21200 | inf | 0.3781 |
| 0.2422 | 5.91 | 21600 | inf | 0.3751 |
| 0.2439 | 6.02 | 22000 | inf | 0.3693 |
| 0.2327 | 6.13 | 22400 | inf | 0.3752 |
| 0.2327 | 6.24 | 22800 | inf | 0.3706 |
| 0.2302 | 6.35 | 23200 | inf | 0.3687 |
| 0.2313 | 6.46 | 23600 | inf | 0.3690 |
| 0.2363 | 6.57 | 24000 | inf | 0.3686 |
| 0.2329 | 6.68 | 24400 | inf | 0.3681 |
| 0.2328 | 6.79 | 24800 | inf | 0.3626 |
| 0.2329 | 6.9 | 25200 | inf | 0.3652 |
| 0.2254 | 7.01 | 25600 | inf | 0.3606 |
| 0.2124 | 7.12 | 26000 | inf | 0.3648 |
| 0.2206 | 7.23 | 26400 | inf | 0.3686 |
| 0.2151 | 7.34 | 26800 | inf | 0.3646 |
| 0.2167 | 7.45 | 27200 | inf | 0.3630 |
| 0.2196 | 7.56 | 27600 | inf | 0.3597 |
| 0.2089 | 7.67 | 28000 | inf | 0.3561 |
| 0.2183 | 7.78 | 28400 | inf | 0.3593 |
| 0.2148 | 7.89 | 28800 | inf | 0.3580 |
| 0.2232 | 7.99 | 29200 | inf | 0.3597 |
| 0.2002 | 8.1 | 29600 | inf | 0.3581 |
| 0.1924 | 8.21 | 30000 | inf | 0.3585 |
| 0.2046 | 8.32 | 30400 | inf | 0.3606 |
| 0.2057 | 8.43 | 30800 | inf | 0.3611 |
| 0.2042 | 8.54 | 31200 | inf | 0.3618 |
| 0.21 | 8.65 | 31600 | inf | 0.3599 |
| 0.2076 | 8.76 | 32000 | inf | 0.3568 |
| 0.208 | 8.87 | 32400 | inf | 0.3564 |
| 0.2154 | 8.98 | 32800 | inf | 0.3566 |
| 0.1991 | 9.09 | 33200 | inf | 0.3621 |
| 0.1986 | 9.2 | 33600 | inf | 0.3571 |
| 0.1898 | 9.31 | 34000 | inf | 0.3515 |
| 0.1961 | 9.42 | 34400 | inf | 0.3559 |
| 0.1947 | 9.53 | 34800 | inf | 0.3521 |
| 0.1886 | 9.64 | 35200 | inf | 0.3500 |
| 0.1901 | 9.75 | 35600 | inf | 0.3557 |
| 0.1998 | 9.86 | 36000 | inf | 0.3547 |
| 0.1873 | 9.97 | 36400 | inf | 0.3498 |
| 0.1858 | 10.08 | 36800 | inf | 0.3552 |
| 0.1804 | 10.18 | 37200 | inf | 0.3518 |
| 0.18 | 10.29 | 37600 | inf | 0.3504 |
| 0.1777 | 10.4 | 38000 | inf | 0.3532 |
| 0.1777 | 10.51 | 38400 | inf | 0.3530 |
| 0.1801 | 10.62 | 38800 | inf | 0.3515 |
| 0.1839 | 10.73 | 39200 | inf | 0.3538 |
| 0.1913 | 10.84 | 39600 | inf | 0.3554 |
| 0.1909 | 10.95 | 40000 | inf | 0.3479 |
| 0.1812 | 11.06 | 40400 | inf | 0.3467 |
| 0.1664 | 11.17 | 40800 | inf | 0.3491 |
| 0.175 | 11.28 | 41200 | inf | 0.3446 |
| 0.1733 | 11.39 | 41600 | inf | 0.3464 |
| 0.1709 | 11.5 | 42000 | inf | 0.3467 |
| 0.1777 | 11.61 | 42400 | inf | 0.3469 |
| 0.1735 | 11.72 | 42800 | inf | 0.3452 |
| 0.1765 | 11.83 | 43200 | inf | 0.3471 |
| 0.1738 | 11.94 | 43600 | inf | 0.3496 |
| 0.1649 | 12.05 | 44000 | inf | 0.3445 |
| 0.1601 | 12.16 | 44400 | inf | 0.3464 |
| 0.1603 | 12.27 | 44800 | inf | 0.3416 |
| 0.1634 | 12.38 | 45200 | inf | 0.3445 |
| 0.1628 | 12.48 | 45600 | inf | 0.3452 |
| 0.1621 | 12.59 | 46000 | inf | 0.3403 |
| 0.1596 | 12.7 | 46400 | inf | 0.3394 |
| 0.1589 | 12.81 | 46800 | inf | 0.3401 |
| 0.1632 | 12.92 | 47200 | inf | 0.3403 |
| 0.163 | 13.03 | 47600 | inf | 0.3429 |
| 0.1516 | 13.14 | 48000 | inf | 0.3417 |
| 0.1506 | 13.25 | 48400 | inf | 0.3417 |
| 0.1568 | 13.36 | 48800 | inf | 0.3410 |
| 0.1543 | 13.47 | 49200 | inf | 0.3409 |
| 0.1574 | 13.58 | 49600 | inf | 0.3408 |
| 0.1555 | 13.69 | 50000 | inf | 0.3424 |
| 0.1535 | 13.8 | 50400 | inf | 0.3395 |
| 0.1539 | 13.91 | 50800 | inf | 0.3409 |
| 0.1528 | 14.02 | 51200 | inf | 0.3406 |
| 0.1411 | 14.13 | 51600 | inf | 0.3366 |
| 0.1413 | 14.24 | 52000 | inf | 0.3402 |
| 0.1477 | 14.35 | 52400 | inf | 0.3386 |
| 0.1433 | 14.46 | 52800 | inf | 0.3356 |
| 0.1446 | 14.57 | 53200 | inf | 0.3357 |
| 0.1427 | 14.67 | 53600 | inf | 0.3378 |
| 0.1462 | 14.78 | 54000 | inf | 0.3328 |
| 0.1436 | 14.89 | 54400 | inf | 0.3358 |
| 0.1434 | 15.0 | 54800 | inf | 0.3366 |
| 0.135 | 15.11 | 55200 | inf | 0.3354 |
| 0.1375 | 15.22 | 55600 | inf | 0.3355 |
| 0.1366 | 15.33 | 56000 | inf | 0.3356 |
| 0.1389 | 15.44 | 56400 | inf | 0.3336 |
| 0.1378 | 15.55 | 56800 | inf | 0.3364 |
| 0.1362 | 15.66 | 57200 | inf | 0.3325 |
| 0.1376 | 15.77 | 57600 | inf | 0.3361 |
| 0.1323 | 15.88 | 58000 | inf | 0.3364 |
| 0.1343 | 15.99 | 58400 | inf | 0.3332 |
| 0.1257 | 16.1 | 58800 | inf | 0.3339 |
| 0.1239 | 16.21 | 59200 | inf | 0.3316 |
| 0.1292 | 16.32 | 59600 | inf | 0.3313 |
| 0.1297 | 16.43 | 60000 | inf | 0.3332 |
| 0.1265 | 16.54 | 60400 | inf | 0.3313 |
| 0.1271 | 16.65 | 60800 | inf | 0.3310 |
| 0.1315 | 16.76 | 61200 | inf | 0.3307 |
| 0.1271 | 16.87 | 61600 | inf | 0.3337 |
| 0.1298 | 16.97 | 62000 | inf | 0.3318 |
| 0.1211 | 17.08 | 62400 | inf | 0.3326 |
| 0.1192 | 17.19 | 62800 | inf | 0.3290 |
| 0.1232 | 17.3 | 63200 | inf | 0.3291 |
| 0.1229 | 17.41 | 63600 | inf | 0.3349 |
| 0.1162 | 17.52 | 64000 | inf | 0.3281 |
| 0.1207 | 17.63 | 64400 | inf | 0.3308 |
| 0.1179 | 17.74 | 64800 | inf | 0.3257 |
| 0.1207 | 17.85 | 65200 | inf | 0.3290 |
| 0.1256 | 17.96 | 65600 | inf | 0.3297 |
| 0.119 | 18.07 | 66000 | inf | 0.3279 |
| 0.1111 | 18.18 | 66400 | inf | 0.3302 |
| 0.1086 | 18.29 | 66800 | inf | 0.3285 |
| 0.1179 | 18.4 | 67200 | inf | 0.3274 |
| 0.1099 | 18.51 | 67600 | inf | 0.3281 |
| 0.1141 | 18.62 | 68000 | inf | 0.3281 |
| 0.1091 | 18.73 | 68400 | inf | 0.3301 |
| 0.1147 | 18.84 | 68800 | inf | 0.3270 |
| 0.1158 | 18.95 | 69200 | inf | 0.3246 |
| 0.1111 | 19.06 | 69600 | inf | 0.3227 |
| 0.1075 | 19.16 | 70000 | inf | 0.3249 |
| 0.1051 | 19.27 | 70400 | inf | 0.3253 |
| 0.1029 | 19.38 | 70800 | inf | 0.3252 |
| 0.1039 | 19.49 | 71200 | inf | 0.3264 |
| 0.1063 | 19.6 | 71600 | inf | 0.3242 |
| 0.1071 | 19.71 | 72000 | inf | 0.3250 |
| 0.1063 | 19.82 | 72400 | inf | 0.3248 |
| 0.1085 | 19.93 | 72800 | inf | 0.3247 |
| 0.1038 | 20.04 | 73200 | inf | 0.3242 |
| 0.1017 | 20.15 | 73600 | inf | 0.3255 |
| 0.099 | 20.26 | 74000 | inf | 0.3247 |
| 0.0971 | 20.37 | 74400 | inf | 0.3258 |
| 0.1002 | 20.48 | 74800 | inf | 0.3223 |
| 0.1013 | 20.59 | 75200 | inf | 0.3230 |
| 0.1018 | 20.7 | 75600 | inf | 0.3232 |
| 0.0967 | 20.81 | 76000 | inf | 0.3215 |
| 0.1008 | 20.92 | 76400 | inf | 0.3212 |
| 0.0975 | 21.03 | 76800 | inf | 0.3191 |
| 0.0893 | 21.14 | 77200 | inf | 0.3210 |
| 0.0911 | 21.25 | 77600 | inf | 0.3206 |
| 0.0959 | 21.36 | 78000 | inf | 0.3211 |
| 0.094 | 21.46 | 78400 | inf | 0.3198 |
| 0.0939 | 21.57 | 78800 | inf | 0.3202 |
| 0.0936 | 21.68 | 79200 | inf | 0.3202 |
| 0.0938 | 21.79 | 79600 | inf | 0.3195 |
| 0.0938 | 21.9 | 80000 | inf | 0.3184 |
| 0.0916 | 22.01 | 80400 | inf | 0.3185 |
| 0.0858 | 22.12 | 80800 | inf | 0.3177 |
| 0.0909 | 22.23 | 81200 | inf | 0.3211 |
| 0.0915 | 22.34 | 81600 | inf | 0.3222 |
| 0.088 | 22.45 | 82000 | inf | 0.3194 |
| 0.0902 | 22.56 | 82400 | inf | 0.3199 |
| 0.0868 | 22.67 | 82800 | inf | 0.3174 |
| 0.0871 | 22.78 | 83200 | inf | 0.3201 |
| 0.0908 | 22.89 | 83600 | inf | 0.3177 |
| 0.0842 | 23.0 | 84000 | inf | 0.3187 |
| 0.0842 | 23.11 | 84400 | inf | 0.3168 |
| 0.0815 | 23.22 | 84800 | inf | 0.3187 |
| 0.084 | 23.33 | 85200 | inf | 0.3201 |
| 0.0835 | 23.44 | 85600 | inf | 0.3185 |
| 0.0821 | 23.55 | 86000 | inf | 0.3189 |
| 0.0836 | 23.66 | 86400 | inf | 0.3179 |
| 0.0816 | 23.76 | 86800 | inf | 0.3174 |
| 0.0847 | 23.87 | 87200 | inf | 0.3172 |
| 0.0828 | 23.98 | 87600 | inf | 0.3178 |
| 0.0796 | 24.09 | 88000 | inf | 0.3144 |
| 0.0793 | 24.2 | 88400 | inf | 0.3149 |
| 0.0773 | 24.31 | 88800 | inf | 0.3165 |
| 0.0808 | 24.42 | 89200 | inf | 0.3154 |
| 0.0743 | 24.53 | 89600 | inf | 0.3159 |
| 0.078 | 24.64 | 90000 | inf | 0.3145 |
| 0.0792 | 24.75 | 90400 | inf | 0.3170 |
| 0.0775 | 24.86 | 90800 | inf | 0.3134 |
| 0.0763 | 24.97 | 91200 | inf | 0.3144 |
| 0.0705 | 25.08 | 91600 | inf | 0.3138 |
| 0.0724 | 25.19 | 92000 | inf | 0.3156 |
| 0.0732 | 25.3 | 92400 | inf | 0.3158 |
| 0.0743 | 25.41 | 92800 | inf | 0.3144 |
| 0.0729 | 25.52 | 93200 | inf | 0.3133 |
| 0.071 | 25.63 | 93600 | inf | 0.3139 |
| 0.0764 | 25.74 | 94000 | inf | 0.3122 |
| 0.0726 | 25.85 | 94400 | inf | 0.3128 |
| 0.0714 | 25.95 | 94800 | inf | 0.3135 |
| 0.0725 | 26.06 | 95200 | inf | 0.3147 |
| 0.0711 | 26.17 | 95600 | inf | 0.3130 |
| 0.0684 | 26.28 | 96000 | inf | 0.3125 |
| 0.0683 | 26.39 | 96400 | inf | 0.3144 |
| 0.0698 | 26.5 | 96800 | inf | 0.3135 |
| 0.0687 | 26.61 | 97200 | inf | 0.3131 |
| 0.0675 | 26.72 | 97600 | inf | 0.3119 |
| 0.0678 | 26.83 | 98000 | inf | 0.3105 |
| 0.0677 | 26.94 | 98400 | inf | 0.3102 |
| 0.068 | 27.05 | 98800 | inf | 0.3128 |
| 0.0694 | 27.16 | 99200 | inf | 0.3111 |
| 0.0681 | 27.27 | 99600 | inf | 0.3118 |
| 0.0656 | 27.38 | 100000 | inf | 0.3110 |
| 0.065 | 27.49 | 100400 | inf | 0.3113 |
| 0.0649 | 27.6 | 100800 | inf | 0.3113 |
| 0.0643 | 27.71 | 101200 | inf | 0.3107 |
| 0.0651 | 27.82 | 101600 | inf | 0.3102 |
| 0.0643 | 27.93 | 102000 | inf | 0.3109 |
| 0.063 | 28.04 | 102400 | inf | 0.3110 |
| 0.0604 | 28.15 | 102800 | inf | 0.3108 |
| 0.062 | 28.25 | 103200 | inf | 0.3110 |
| 0.0623 | 28.36 | 103600 | inf | 0.3106 |
| 0.063 | 28.47 | 104000 | inf | 0.3102 |
| 0.0619 | 28.58 | 104400 | inf | 0.3101 |
| 0.0636 | 28.69 | 104800 | inf | 0.3108 |
| 0.0636 | 28.8 | 105200 | inf | 0.3099 |
| 0.0643 | 28.91 | 105600 | inf | 0.3089 |
| 0.0607 | 29.02 | 106000 | inf | 0.3094 |
| 0.0597 | 29.13 | 106400 | inf | 0.3091 |
| 0.0616 | 29.24 | 106800 | inf | 0.3087 |
| 0.0594 | 29.35 | 107200 | inf | 0.3087 |
| 0.0614 | 29.46 | 107600 | inf | 0.3087 |
| 0.06 | 29.57 | 108000 | inf | 0.3082 |
| 0.0617 | 29.68 | 108400 | inf | 0.3085 |
| 0.0574 | 29.79 | 108800 | inf | 0.3082 |
| 0.06 | 29.9 | 109200 | inf | 0.3082 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
GhostDragon01/habib_photo_LoRA_Realistic_Vision_V3_Turbo
|
GhostDragon01
| 2024-03-04T15:14:16Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:SG161222/RealVisXL_V3.0_Turbo",
"base_model:adapter:SG161222/RealVisXL_V3.0_Turbo",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-04T15:14:12Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: SG161222/RealVisXL_V3.0_Turbo
instance_prompt: a photo of a <HABIB> man
license: openrail++
---
# SDXL LoRA DreamBooth - GhostDragon01/habib_photo_LoRA_Realistic_Vision_V3_Turbo
<Gallery />
## Model description
These are GhostDragon01/habib_photo_LoRA_Realistic_Vision_V3_Turbo LoRA adaption weights for SG161222/RealVisXL_V3.0_Turbo.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of a <HABIB> man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/GhostDragon01/habib_photo_LoRA_Realistic_Vision_V3_Turbo/tree/main) them in the Files & versions tab.
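A hypothetical usage sketch (not provided by the author): load the Turbo base checkpoint, attach these LoRA weights, and prompt with the trigger phrase. The step count and guidance scale are assumptions typical for Turbo-style models.
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "SG161222/RealVisXL_V3.0_Turbo", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("GhostDragon01/habib_photo_LoRA_Realistic_Vision_V3_Turbo")

image = pipe(
    "a photo of a <HABIB> man, portrait, natural light",
    num_inference_steps=6,   # Turbo models need few steps (assumption)
    guidance_scale=2.0,      # low CFG suits Turbo models (assumption)
).images[0]
image.save("habib.png")
```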
|
Shashank91097/Idefic
|
Shashank91097
| 2024-03-04T15:11:27Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"idefics",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-02-18T13:36:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RENREN6/llama2-fine-tuned-openassistant
|
RENREN6
| 2024-03-04T15:10:16Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T14:33:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sophosympatheia/Aurora-Nights-70B-v1.0
|
sophosympatheia
| 2024-03-04T15:03:43Z | 1,444 | 22 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2307.11760",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-23T17:08:11Z |
---
language:
- en
license: llama2
model-index:
- name: Aurora-Nights-70B-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.81
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/aGUU0O9.png" alt="AuroraNights" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Overview
This model is a blend of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) into the resultant blend. See the bottom of this card for the exact settings used.
This model is good at both following instructions and producing creative, uncensored storytelling and roleplaying content.
This model turned out quite uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas.
### Sampler Tips
* I recommend keeping your max context to around 6144 tokens, although you can push higher if you don't mind some decrease in coherence.
* I recommend using Quadratic Sampling (i.e. smoothing factor) as it's good stuff. Experiment with values between 0.2 and 0.5.
* I recommend using Min-P. This model seems to work well with Min-P values in the entire range from low settings like 0.05 to high settings like 0.9 when paired with a smoothing factor. Experiment to find your best setting. (A toy sketch of the min-p rule follows the settings block below.)
* You can enable dynamic temperature if you want, but it adds yet another variable to consider, and I find it unnecessary when you're already using Min-P and a smoothing factor.
* You don't *need* to use a high repetition penalty with this model, but it tolerates high rep penalty, so experiment to find the right value for your preferences.
Experiment with any and all of the settings below! I'm not a sampler wizard, and what suits my preferences may not suit yours.
If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```
{
"temp": 1,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.35,
"rep_pen": 1.15,
"rep_pen_range": 2800,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 0.8,
"max_temp": 1.35,
"dynatemp_exponent": 1,
"smoothing_factor": 0.4,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": true,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 2,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"banned_tokens": "",
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"n": 1,
"rep_pen_size": 0,
"genamt": 500,
"max_length": 6144
}
```
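For reference, the min-p rule recommended above fits in a few lines. This is a toy illustration of the idea, not SillyTavern's implementation: a token stays in the candidate pool only if its probability is at least `min_p` times the top token's probability.
```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float) -> torch.Tensor:
    """Mask tokens whose probability is below min_p * p(most likely token)."""
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < threshold, float("-inf"))

# With min_p = 0.35 (as in the settings above), only tokens at least 35% as
# likely as the single best token remain candidates for sampling.
filtered = min_p_filter(torch.randn(1, 32000), min_p=0.35)
```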
### Prompting Tips
Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly.
```
{
"story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}",
"example_separator": "",
"chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n",
"use_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Midnight Rose"
}
```
Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760).
I recommend the Vicuna instruction format for this blend but others work too. Tulu's format is another good one to use.
*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. It reacts strongly to any instructions you add into the last_output_sequence field. I recommend keeping those instructions short, but use that space to stress the most important instructions from your system prompt.
The example system prompt below contains some instructions related to adult content so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking.
If you plan to run this model at 4096 context, I recommend that you slim down the template system prompt below because it's heavy on tokens. Discard what you don't want or need.
If you save this as a .json file, you can import it directly into Silly Tavern.
```
{
"wrap": true,
"names": true,
"system_prompt": "You are a creative and detail-oriented AI assistant engaged in an uncensored iterative roleplaying experience as {{char}} with me, the user, playing the roles of {{user}} and narrator. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.\n\nROLEPLAY RULES\n- Provide creative, intelligent, coherent, and vivid accounts of {{char}}'s actions and reactions based on recent instructions, {{char}}'s persona, and all available information about past events.\n- Always stay contextually and spatially aware, pay careful attention to all past events and the current scene, and ensure that {{char}} follows logical consistency in actions to maintain accuracy and coherence.\n- Explicit adult content and profanity are permitted.\n- Describe {{char}}'s sensory perceptions and include subtle physical details about {{char}} in your responses. Vary these details to keep the roleplay fresh and engaging.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts, enclose those thoughts in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your contributions as {{char}}, staying focused on action and dialogue over exposition and narrative.\n- Only the user may advance time in the roleplay. Keep the progression grounded in the present context.",
"system_sequence": "",
"stop_sequence": "",
"input_sequence": "USER:\n",
"output_sequence": "ASSISTANT:\n",
"separator_sequence": "",
"macro": true,
"names_force_groups": true,
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "ASSISTANT(roleplay exclusively as {{char}} ensuring logical consistency with spacial awareness and past events to maintain accuracy and coherence):\n",
"activation_regex": "",
"name": "Midnight Rose Roleplay"
}
```
### Licence and usage restrictions
Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b).
Tulu also has its own license, available at https://allenai.org/impact-license.
I am not a lawyer and I do not profess to know how multiple licenses intersect in a merge of LLM model weights. You should consult with a lawyer before using any model merge beyond private use.
### Tools Used
* [mergekit](https://github.com/cg123/mergekit)
```
models:
- model: NousResearch_Llama-2-70b-hf
# no parameters necessary for base model
- model: allenai_tulu-2-dpo-70b # primary
parameters:
density: 1.0
weight: 0.4
- model: Xwin-LM_Xwin-LM-70B-V0.1 # secondary
parameters:
density: 0.7
weight: 0.3
- model: dreamgen_opus-v0.5-70b # supporting, good at storytelling and roleplay
parameters:
density: 0.2
weight: 0.6
merge_method: dare_ties
base_model: NousResearch_Llama-2-70b-hf
parameters:
normalize: true
int8_mask: true
dtype: float32
```
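For intuition, here is a toy sketch of the DARE step that `dare_ties` applies to each contributing model's delta from the base (sign election and the final parameter sum across models are omitted, and the tensors are stand-ins for real weights):
```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each delta entry with probability `density`, rescaling survivors by 1/density."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# e.g. the Xwin contribution above: density 0.7, then weight 0.3 in the sum.
base, finetuned = torch.randn(2, 4096, 4096)
merged_delta = 0.3 * dare(finetuned - base, density=0.7)
```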
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sophosympatheia__Aurora-Nights-70B-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.77|
|AI2 Reasoning Challenge (25-Shot)|71.33|
|HellaSwag (10-Shot) |88.33|
|MMLU (5-Shot) |70.47|
|TruthfulQA (0-shot) |62.81|
|Winogrande (5-shot) |83.35|
|GSM8k (5-shot) |66.34|
|
MarkBW/jane-porter-tarzan
|
MarkBW
| 2024-03-04T15:03:16Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-03-04T15:02:36Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/Jane Porter (2).jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: blue eyes, brown hair, full body, gloves, white gloves, yellow dress
---
# jane-porter-tarzan
<Gallery />
## Trigger words
You should use `blue eyes` to trigger the image generation.
You should use `brown hair` to trigger the image generation.
You should use `full body` to trigger the image generation.
You should use `gloves` to trigger the image generation.
You should use `white gloves` to trigger the image generation.
You should use `yellow dress` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MarkBW/jane-porter-tarzan/tree/main) them in the Files & versions tab.
|
piazzola/test2
|
piazzola
| 2024-03-04T15:02:59Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-efficient-base",
"base_model:finetune:google/t5-efficient-base",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-03T22:34:33Z |
---
license: cc-by-nc-nd-4.0
base_model: google/t5-efficient-base
tags:
- generated_from_trainer
model-index:
- name: checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# how to use the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("piazzola/test2")
model = AutoModelForSeq2SeqLM.from_pretrained("piazzola/test2")

# Reuse the already-loaded model and tokenizer rather than downloading twice.
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
sentence = "i left the keys in the car."
output = pipe(sentence, max_new_tokens=100, do_sample=True, temperature=0.1)
print(output)
```
# checkpoint
This model is a fine-tuned version of [google/t5-efficient-base](https://huggingface.co/google/t5-efficient-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.3 | 240 | 1.4901 |
| No log | 0.6 | 480 | 0.7750 |
| 3.5263 | 0.9 | 720 | 0.5219 |
| 3.5263 | 1.2 | 960 | 0.3782 |
| 0.607 | 1.5 | 1200 | 0.3521 |
| 0.607 | 1.8 | 1440 | 0.3356 |
| 0.4173 | 2.1 | 1680 | 0.3255 |
| 0.4173 | 2.4 | 1920 | 0.3151 |
| 0.368 | 2.7 | 2160 | 0.3093 |
| 0.368 | 3.0 | 2400 | 0.3070 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Sebu/Reinforce-1
|
Sebu
| 2024-03-04T15:01:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T15:01:44Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 439.70 +/- 74.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
argilla/notux-8x7b-v1
|
argilla
| 2024-03-04T14:57:05Z | 58 | 165 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"dpo",
"rlaif",
"preference",
"ultrafeedback",
"moe",
"conversational",
"en",
"de",
"es",
"fr",
"it",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-12T16:11:49Z |
---
language:
- en
- de
- es
- fr
- it
license: apache-2.0
library_name: transformers
tags:
- dpo
- rlaif
- preference
- ultrafeedback
- moe
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
pipeline_tag: text-generation
model-index:
- name: notux-8x7b-v1
results: []
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/dj-spsk9eXMMXVGxK6jRz.png" alt="A banner representing Notus, the wind god of the south, in a mythical and artistic style. The banner features a strong, swirling breeze, embodying the warm, wet character of the southern wind. Gracefully flowing across the scene are several paper planes, caught in the gentle yet powerful gusts of Notus. The background is a blend of warm colors, symbolizing the heat of the south, with hints of blue and green to represent the moisture carried by this wind. The overall atmosphere is one of dynamic movement and warmth."/>
</div>
# Model Card for Notux 8x7B-v1
This model is a preference-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned) dataset using DPO (Direct Preference Optimization).
As of Dec 26th 2023, it outperforms `Mixtral-8x7B-Instruct-v0.1` and is the top-ranked MoE (Mixture of Experts) model on the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
This is part of the Notus family of models and experiments, where the Argilla team investigates data-first and preference tuning methods like dDPO (distilled DPO). This model is the result of our first experiment at tuning a MoE model that has already been fine-tuned with DPO (i.e., Mixtral-8x7B-Instruct-v0.1).
## Model Details
### Model Description
- **Developed by:** Argilla (based on MistralAI previous efforts)
- **Shared by:** Argilla
- **Model type:** Pretrained generative Sparse Mixture of Experts
- **Language(s) (NLP):** English, Spanish, Italian, German, and French
- **License:** MIT
- **Finetuned from model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
### Model Sources
- **Repository:** https://github.com/argilla-io/notus
- **Paper:** N/A
## Training Details
### Training Hardware
We used a VM with 8 x H100 80GB GPUs hosted on runpod.io; training took about 10 hours for 1 epoch.
### Training Data
We used a new iteration of the Argilla UltraFeedback preferences dataset named [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.4384 | 0.22 | 200 | 0.4556 | -0.3275 | -1.9448 | 0.7937 | 1.6174 | -405.7994 | -397.8617 | -1.3157 | -1.4511 |
| 0.4064 | 0.43 | 400 | 0.4286 | -0.2163 | -2.2090 | 0.8254 | 1.9927 | -408.4409 | -396.7496 | -0.7660 | -0.6539 |
| 0.3952 | 0.65 | 600 | 0.4275 | -0.1311 | -2.1603 | 0.8016 | 2.0291 | -407.9537 | -395.8982 | -0.6783 | -0.7206 |
| 0.3909 | 0.87 | 800 | 0.4167 | -0.2273 | -2.3146 | 0.8135 | 2.0872 | -409.4968 | -396.8602 | -0.8458 | -0.7738 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.15.0
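For reference, a hedged usage sketch (not from the original card) of loading and prompting the model with `transformers`; it assumes hardware large enough for the MoE weights.
```python
# Hedged usage sketch: load Notux 8x7B and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "argilla/notux-8x7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is preference tuning?"}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```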
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__notus-8x7b-experiment)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.18|
|AI2 Reasoning Challenge (25-Shot)|70.99|
|HellaSwag (10-Shot) |87.73|
|MMLU (5-Shot) |71.33|
|TruthfulQA (0-shot) |65.79|
|Winogrande (5-shot) |81.61|
|GSM8k (5-shot) |61.64|
|
vince62s/phi-2-psy
|
vince62s
| 2024-03-04T14:52:55Z | 55 | 15 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rhysjones/phi-2-orange",
"cognitivecomputations/dolphin-2_6-phi-2",
"custom_code",
"base_model:cognitivecomputations/dolphin-2_6-phi-2",
"base_model:merge:cognitivecomputations/dolphin-2_6-phi-2",
"base_model:rhysjones/phi-2-orange",
"base_model:merge:rhysjones/phi-2-orange",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T18:14:13Z |
---
license: mit
tags:
- merge
- mergekit
- lazymergekit
- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2
base_model:
- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2
model-index:
- name: phi-2-psy
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vince62s/phi-2-psy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 75.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vince62s/phi-2-psy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vince62s/phi-2-psy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vince62s/phi-2-psy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vince62s/phi-2-psy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vince62s/phi-2-psy
name: Open LLM Leaderboard
---
# Phi-2-psy
Phi-2-psy is a merge of the following models:
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)
## 🏆 Evaluation
The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite.
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**phi-2-psy**](https://huggingface.co/vince62s/phi-2-psy)| **34.4**| **71.4**| **48.2**| **38.1**| **48.02**|
|[phixtral-2x2_8](https://huggingface.co/mlabonne/phixtral-2x2_8)| 34.1| 70.4| 48.8| 37.8| 47.78|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)| 33.1| 69.9| 47.4| 37.2| 46.89|
|[phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)| 33.4| 71.3| 49.9| 37.3| 47.97|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 28.0| 70.8| 44.4| 35.2| 44.61|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: rhysjones/phi-2-orange
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2_6-phi-2
layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("vince62s/phi-2-psy", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("vince62s/phi-2-psy", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vince62s__phi-2-psy)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.80|
|AI2 Reasoning Challenge (25-Shot)|60.84|
|HellaSwag (10-Shot) |75.52|
|MMLU (5-Shot) |57.57|
|TruthfulQA (0-shot) |48.22|
|Winogrande (5-shot) |75.45|
|GSM8k (5-shot) |59.21|
|
UsmanAXAI/whisper-small-ft-client
|
UsmanAXAI
| 2024-03-04T14:52:09Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ar-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:AXAI/client",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-04T12:50:31Z |
---
language:
- ar
license: apache-2.0
tags:
- ar-asr-leaderboard
- generated_from_trainer
datasets:
- AXAI/client
metrics:
- wer
base_model: openai/whisper-small
model-index:
- name: Whisper small Ar - AxAI
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Client
type: AXAI/client
config: default
split: None
args: default
metrics:
- type: wer
value: 84.11458333333334
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small Ar - AxAI
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Client dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5990
- Wer: 84.1146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.8044 | 6.37 | 200 | 1.2417 | 69.9219 |
| 0.036 | 12.75 | 400 | 1.1791 | 60.9375 |
| 0.0108 | 19.12 | 600 | 1.3128 | 80.2083 |
| 0.0035 | 25.5 | 800 | 1.3641 | 62.6953 |
| 0.0009 | 31.87 | 1000 | 1.4066 | 66.6016 |
| 0.0004 | 38.25 | 1200 | 1.4410 | 64.5833 |
| 0.0003 | 44.62 | 1400 | 1.4712 | 63.3464 |
| 0.0002 | 51.0 | 1600 | 1.4927 | 63.6068 |
| 0.0002 | 57.37 | 1800 | 1.5102 | 67.1875 |
| 0.0002 | 63.75 | 2000 | 1.5254 | 66.6016 |
| 0.0001 | 70.12 | 2200 | 1.5393 | 77.8646 |
| 0.0001 | 76.49 | 2400 | 1.5512 | 77.9297 |
| 0.0001 | 82.87 | 2600 | 1.5616 | 77.7344 |
| 0.0001 | 89.24 | 2800 | 1.5710 | 83.1380 |
| 0.0001 | 95.62 | 3000 | 1.5791 | 88.0859 |
| 0.0001 | 101.99 | 3200 | 1.5854 | 88.1510 |
| 0.0001 | 108.37 | 3400 | 1.5910 | 88.0859 |
| 0.0001 | 114.74 | 3600 | 1.5953 | 84.1146 |
| 0.0001 | 121.12 | 3800 | 1.5978 | 84.1797 |
| 0.0001 | 127.49 | 4000 | 1.5990 | 84.1146 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
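A hedged usage sketch (not part of the generated card) of transcribing Arabic audio with the `transformers` ASR pipeline; `sample.wav` is a hypothetical input file.
```python
# Hedged sketch: transcribe an Arabic audio file with this fine-tuned model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="UsmanAXAI/whisper-small-ft-client")
result = asr("sample.wav", generate_kwargs={"language": "arabic", "task": "transcribe"})
print(result["text"])
```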
|
stablediffusionapi/realspicev1
|
stablediffusionapi
| 2024-03-04T14:49:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-04T14:48:57Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the key in the code below and set **model_id** to `"realspicev1"`.
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/realspicev1)
Model link: [View model](https://modelslab.com/models/realspicev1)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "realspicev1",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
Aryanne/WestSenzu-Swap-7B
|
Aryanne
| 2024-03-04T14:45:53Z | 168 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:NeuralNovel/Senzu-7B-v0.1-DPO",
"base_model:merge:NeuralNovel/Senzu-7B-v0.1-DPO",
"base_model:senseable/WestLake-7B-v2",
"base_model:merge:senseable/WestLake-7B-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T01:58:26Z |
---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- NeuralNovel/Senzu-7B-v0.1-DPO
- senseable/WestLake-7B-v2
model-index:
- name: WestSenzu-Swap-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/WestSenzu-Swap-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/WestSenzu-Swap-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/WestSenzu-Swap-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.43
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/WestSenzu-Swap-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/WestSenzu-Swap-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/WestSenzu-Swap-7B
name: Open LLM Leaderboard
---
It's experimental but seems fine to me. I haven't tested it deeply yet, but considering the two merged models it should be good for role-play 😈. Feel free to leave a suggestion or feedback.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit) (my experimental `swapping` branch, available [here](https://github.com/Ar57m/mergekit/tree/swapping)).
## Merge Details
### Merge Method
This model was merged using the task_swapping merge method using [NeuralNovel/Senzu-7B-v0.1-DPO](https://huggingface.co/NeuralNovel/Senzu-7B-v0.1-DPO) as a base.
### Models Merged
The following models were included in the merge:
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_swapping
base_model: NeuralNovel/Senzu-7B-v0.1-DPO
models:
- model: senseable/WestLake-7B-v2
parameters:
weight: 0.75
diagonal_offset: 2 #it doesn't do anything when you use random_mask
random_mask: 0.3333
random_mask_seed: 98557
dtype: bfloat16
```
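For reproducibility, a hedged sketch of running this configuration with mergekit's Python API; note that `task_swapping` lives in the experimental branch linked above, not in mainline mergekit, so that branch must be installed first.
```python
# Hedged sketch: run the YAML recipe above via mergekit's Python API.
# Requires the author's experimental branch (task_swapping is not mainline).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml") as f:  # the YAML configuration shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(merge_config, "./WestSenzu-Swap-7B", options=MergeOptions(copy_tokenizer=True))
```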
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__WestSenzu-Swap-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.28|
|AI2 Reasoning Challenge (25-Shot)|68.34|
|HellaSwag (10-Shot) |85.70|
|MMLU (5-Shot) |64.14|
|TruthfulQA (0-shot) |50.43|
|Winogrande (5-shot) |82.48|
|GSM8k (5-shot) |52.62|
|
Aryanne/Westest-7B
|
Aryanne
| 2024-03-04T14:45:48Z | 158 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:chargoddard/piano-medley-7b",
"base_model:merge:chargoddard/piano-medley-7b",
"base_model:senseable/WestLake-7B-v2",
"base_model:merge:senseable/WestLake-7B-v2",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T04:29:52Z |
---
license: cc-by-sa-4.0
tags:
- mergekit
- merge
base_model:
- chargoddard/piano-medley-7b
- senseable/WestLake-7B-v2
model-index:
- name: Westest-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 86.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
---
# Westest-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the task_anysize merge method using [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) as a base.
### Models Merged
The following models were included in the merge:
* [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: senseable/WestLake-7B-v2
dtype: bfloat16
merge_method: task_anysize
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: chargoddard/piano-medley-7b
parameters:
weight: 0.55
- layer_range: [0, 32]
model:
model:
path: senseable/WestLake-7B-v2
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__Westest-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.03|
|AI2 Reasoning Challenge (25-Shot)|72.18|
|HellaSwag (10-Shot) |88.52|
|MMLU (5-Shot) |64.43|
|TruthfulQA (0-shot) |66.72|
|Winogrande (5-shot) |86.58|
|GSM8k (5-shot) |65.73|
|
Aryanne/sheared-plus-westlake-normal
|
Aryanne
| 2024-03-04T14:45:14Z | 1,392 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:09:57Z |
---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: sheared-plus-westlake-normal
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 39.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 70.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
name: Open LLM Leaderboard
---
Another trial at merging models of different sizes. It is still under testing and should be more stable, but I have no idea whether it improves or degrades the base model.
Recipe:
```yaml
merge_method: task_anysize
base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT
models:
- model: senseable/WestLake-7B-v2
parameters:
weight: 1.0
dtype: bfloat16
```
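Since GGUF files are also published in this repo, here is a hedged sketch of loading one with `llama-cpp-python`; the filename below is a placeholder, so check the repo's Files & versions tab for the actual name.
```python
# Hedged sketch: the GGUF filename is hypothetical; verify it in the repo.
from llama_cpp import Llama

llm = Llama(model_path="sheared-plus-westlake-normal.Q4_K_M.gguf", n_ctx=2048)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```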
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__sheared-plus-westlake-normal)
| Metric |Value|
|---------------------------------|----:|
|Avg. |41.16|
|AI2 Reasoning Challenge (25-Shot)|39.76|
|HellaSwag (10-Shot) |70.33|
|MMLU (5-Shot) |26.81|
|TruthfulQA (0-shot) |46.50|
|Winogrande (5-shot) |63.54|
|GSM8k (5-shot) | 0.00|
|
Aryanne/TinyllamaMix-1.1B
|
Aryanne
| 2024-03-04T14:45:01Z | 134 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T03:31:25Z |
---
license: apache-2.0
inference:
parameters:
temperature: 0.79
widget:
- messages:
- role: user
content: How to gain more money?
model-index:
- name: TinyllamaMix-1.1B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 31.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 48.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 33.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B
name: Open LLM Leaderboard
---
This is an experimental TinyLlama mix merge, built with a custom merge method.
It should be better at RP.
```yaml
merge_method: task_swapping
base_model: Doctor-Shotgun/TinyLlama-1.1B-32k
models:
- model: cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser
parameters:
weight: 0.75
diagonal_offset: 5
- model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
parameters:
weight: 0.85
diagonal_offset: 17
invert_offset: True
dtype: bfloat16
name: bye
---
merge_method: task_swapping
base_model: Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct
models:
- model: vihangd/DopeyTinyLlama-1.1B-v1
parameters:
weight: 0.8
diagonal_offset: 3
invert_offset: False
dtype: bfloat16
name: hello
---
merge_method: task_arithmetic
base_model: Doctor-Shotgun/TinyLlama-1.1B-32k
models:
- model: hello
parameters:
weight: 0.66
- model: bye+Anarchist/PIPPA_LORA_TinyLlama
parameters:
weight: 0.5
dtype: bfloat16
```
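A hedged usage sketch: chatting with the merge through the `transformers` pipeline, reusing the widget's example prompt and temperature; `apply_chat_template` assumes the repo ships a chat template.
```python
# Hedged sketch: prompt the model with the widget's example message.
from transformers import pipeline

pipe = pipeline("text-generation", model="Aryanne/TinyllamaMix-1.1B")
messages = [{"role": "user", "content": "How to gain more money?"}]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.79)
print(out[0]["generated_text"])
```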
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__TinyllamaMix-1.1B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |32.99|
|AI2 Reasoning Challenge (25-Shot)|31.48|
|HellaSwag (10-Shot) |48.39|
|MMLU (5-Shot) |25.05|
|TruthfulQA (0-shot) |33.45|
|Winogrande (5-shot) |58.48|
|GSM8k (5-shot) | 1.06|
|
BiQiB7/redVintage
|
BiQiB7
| 2024-03-04T14:44:34Z | 0 | 1 | null |
[
"region:us"
] | null | 2024-03-02T14:37:34Z |
# redVintage - LoRA DreamBooth SDXL
Trigger word: `v1nta93r3dxtyle`
A weight of 1.5 works well on the SDXL 1.0 base.
Vintage-style posters with hints of red and red soda.

"(v1nta93r3dxtyle:1.5) poster, cat sitting beside the beach, high quality"

"(v1nta93r3dxtyle:2.0) poster,cat beside soda, high quality"

"(v1nta93r3dxtyle:2.0) poster, man sitting beside the beach, high quality"

"(v1nta93r3dxtyle:2.0) poster, woman sitting beside the beach, soda"
|
Aryanne/sheared-plus-westlake-50_75p
|
Aryanne
| 2024-03-04T14:44:26Z | 1,369 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:42:25Z |
---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: sheared-plus-westlake-50_75p
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 58.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
---
Another trial at merging models of different sizes. It is still under testing and should be more stable, but I have no idea whether it improves or degrades the base model.
In this one I changed the recipe to include more WestLake.
Recipe:
```yaml
merge_method: task_anysize
base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT
models:
- model: senseable/WestLake-7B-v2
parameters:
weight: 1.0
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__sheared-plus-westlake-50_75p)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.31|
|AI2 Reasoning Challenge (25-Shot)|34.04|
|HellaSwag (10-Shot) |58.05|
|MMLU (5-Shot) |26.24|
|TruthfulQA (0-shot) |42.64|
|Winogrande (5-shot) |56.91|
|GSM8k (5-shot) | 0.00|
|
Aryanne/sheared-plus-westlake-nearest-50_75p
|
Aryanne
| 2024-03-04T14:44:18Z | 54 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T16:52:32Z |
---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: sheared-plus-westlake-nearest-50_75p
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 57.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.2
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
---
Another trial at merging models of different sizes. It is still under testing and should be more stable, but I have no idea whether it improves or degrades the base model.
In this one I changed the recipe to include more WestLake.
Recipe:
```yaml
merge_method: task_anysize
base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT
models:
- model: senseable/WestLake-7B-v2
parameters:
weight: 1.0
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__sheared-plus-westlake-nearest-50_75p)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.18|
|AI2 Reasoning Challenge (25-Shot)|36.18|
|HellaSwag (10-Shot) |57.54|
|MMLU (5-Shot) |24.20|
|TruthfulQA (0-shot) |42.39|
|Winogrande (5-shot) |56.75|
|GSM8k (5-shot) | 0.00|
|
adambiel22/calculator_model_test
|
adambiel22
| 2024-03-04T14:43:20Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-04T14:17:03Z |
---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
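For reference, a hedged sketch of how these hyperparameters map onto `transformers.TrainingArguments`; the model and dataset wiring are omitted because the card does not document them.
```python
# Hedged mapping of the listed hyperparameters; model/dataset setup omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="calculator_model_test",
    learning_rate=1e-3,
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
)
```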
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4753 | 1.0 | 13 | 1.7336 |
| 1.3843 | 2.0 | 26 | 0.9380 |
| 0.8281 | 3.0 | 39 | 0.7141 |
| 0.6509 | 4.0 | 52 | 0.5507 |
| 0.5643 | 5.0 | 65 | 0.5065 |
| 0.5155 | 6.0 | 78 | 0.4642 |
| 0.445 | 7.0 | 91 | 0.3848 |
| 0.3912 | 8.0 | 104 | 0.3584 |
| 0.3564 | 9.0 | 117 | 0.3142 |
| 0.3175 | 10.0 | 130 | 0.2663 |
| 0.2765 | 11.0 | 143 | 0.2291 |
| 0.2436 | 12.0 | 156 | 0.1915 |
| 0.2186 | 13.0 | 169 | 0.1556 |
| 0.1646 | 14.0 | 182 | 0.1097 |
| 0.1504 | 15.0 | 195 | 0.0862 |
| 0.113 | 16.0 | 208 | 0.0725 |
| 0.0864 | 17.0 | 221 | 0.0500 |
| 0.0722 | 18.0 | 234 | 0.0425 |
| 0.0654 | 19.0 | 247 | 0.0389 |
| 0.0581 | 20.0 | 260 | 0.0312 |
| 0.052 | 21.0 | 273 | 0.0284 |
| 0.0459 | 22.0 | 286 | 0.0275 |
| 0.0426 | 23.0 | 299 | 0.0213 |
| 0.0381 | 24.0 | 312 | 0.0200 |
| 0.0374 | 25.0 | 325 | 0.0186 |
| 0.0334 | 26.0 | 338 | 0.0185 |
| 0.0307 | 27.0 | 351 | 0.0157 |
| 0.0292 | 28.0 | 364 | 0.0147 |
| 0.0269 | 29.0 | 377 | 0.0146 |
| 0.0261 | 30.0 | 390 | 0.0129 |
| 0.0248 | 31.0 | 403 | 0.0121 |
| 0.0237 | 32.0 | 416 | 0.0114 |
| 0.023 | 33.0 | 429 | 0.0115 |
| 0.0217 | 34.0 | 442 | 0.0109 |
| 0.0208 | 35.0 | 455 | 0.0101 |
| 0.0205 | 36.0 | 468 | 0.0102 |
| 0.0197 | 37.0 | 481 | 0.0098 |
| 0.0193 | 38.0 | 494 | 0.0096 |
| 0.019 | 39.0 | 507 | 0.0094 |
| 0.0188 | 40.0 | 520 | 0.0095 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
WenjunJi/Hello
|
WenjunJi
| 2024-03-04T14:41:03Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2024-03-04T14:16:20Z |
---
'[object Object]': null
---
# Model Card for {{ model_id | default("Model ID", true) }}
<!-- Provide a quick summary of what the model is/does. -->
{{ model_summary | default("", true) }}
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
{{ model_description | default("", true) }}
- **Developed by:** {{ developers | default("[More Information Needed]", true)}}
- **Funded by [optional]:** {{ funded_by | default("[More Information Needed]", true)}}
- **Shared by [optional]:** {{ shared_by | default("[More Information Needed]", true)}}
- **Model type:** {{ model_type | default("[More Information Needed]", true)}}
- **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}}
- **License:** {{ license | default("[More Information Needed]", true)}}
- **Finetuned from model [optional]:** {{ base_model | default("[More Information Needed]", true)}}
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** {{ repo | default("[More Information Needed]", true)}}
- **Paper [optional]:** {{ paper | default("[More Information Needed]", true)}}
- **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}}
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
{{ direct_use | default("[More Information Needed]", true)}}
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
{{ downstream_use | default("[More Information Needed]", true)}}
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
{{ out_of_scope_use | default("[More Information Needed]", true)}}
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
{{ bias_risks_limitations | default("[More Information Needed]", true)}}
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
{{ bias_recommendations | default("Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", true)}}
## How to Get Started with the Model
Use the code below to get started with the model.
{{ get_started_code | default("[More Information Needed]", true)}}
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
{{ training_data | default("[More Information Needed]", true)}}
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
{{ preprocessing | default("[More Information Needed]", true)}}
#### Training Hyperparameters
- **Training regime:** {{ training_regime | default("[More Information Needed]", true)}} <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
{{ speeds_sizes_times | default("[More Information Needed]", true)}}
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
{{ testing_data | default("[More Information Needed]", true)}}
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
{{ testing_factors | default("[More Information Needed]", true)}}
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
{{ testing_metrics | default("[More Information Needed]", true)}}
### Results
{{ results | default("[More Information Needed]", true)}}
#### Summary
{{ results_summary | default("", true) }}
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
{{ model_examination | default("[More Information Needed]", true)}}
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** {{ hardware_type | default("[More Information Needed]", true)}}
- **Hours used:** {{ hours_used | default("[More Information Needed]", true)}}
- **Cloud Provider:** {{ cloud_provider | default("[More Information Needed]", true)}}
- **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}}
- **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}}
## Technical Specifications [optional]
### Model Architecture and Objective
{{ model_specs | default("[More Information Needed]", true)}}
### Compute Infrastructure
{{ compute_infrastructure | default("[More Information Needed]", true)}}
#### Hardware
{{ hardware_requirements | default("[More Information Needed]", true)}}
#### Software
{{ software | default("[More Information Needed]", true)}}
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
{{ citation_bibtex | default("[More Information Needed]", true)}}
**APA:**
{{ citation_apa | default("[More Information Needed]", true)}}
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
{{ glossary | default("[More Information Needed]", true)}}
## More Information [optional]
{{ more_information | default("[More Information Needed]", true)}}
## Model Card Authors [optional]
{{ model_card_authors | default("[More Information Needed]", true)}}
## Model Card Contact
{{ model_card_contact | default("[More Information Needed]", true)}}
|
TeeZee/Kyllene-57B-v1.0
|
TeeZee
| 2024-03-04T14:38:50Z | 67 | 5 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T03:40:58Z |
---
license: other
tags:
- merge
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
model-index:
- name: Kyllene-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.89
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0
name: Open LLM Leaderboard
---
# Kyllene 57B v1.0

## Model Details
- The result of a SLERP merge of [TeeZee/2xbagel-dpo-34b-v0.2](https://huggingface.co/TeeZee/2xbagel-dpo-34b-v0.2) and [TeeZee/2xNous-Capybara-34B](https://huggingface.co/TeeZee/2xNous-Capybara-34B).
- The resulting model has 100 layers and approximately 57 billion parameters.
- 200K context length
- See [mergekit-config.yml](https://huggingface.co/TeeZee/Kyllene-v1.0/blob/main/mergekit-config.yml) for details on the merge method used.
**Warning: This model can produce NSFW content!**
## Results
- produces SFW and NSFW content without issues, and switches context seamlessly
- good at following instructions
- pretty close to my ideal everyday RPG/ERP model (but more tests are needed)

All comments are greatly appreciated. Download it, test it, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__Kyllene-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.86|
|AI2 Reasoning Challenge (25-Shot)|64.85|
|HellaSwag (10-Shot) |84.51|
|MMLU (5-Shot) |73.33|
|TruthfulQA (0-shot) |57.89|
|Winogrande (5-shot) |78.22|
|GSM8k (5-shot) |30.40|
|
manojpreveen/distilbart-cnn-v3
|
manojpreveen
| 2024-03-04T14:37:57Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"rust",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-03-04T14:18:09Z |
---
language: en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---
### Service Deployment Code :
https://github.com/manojpreveen/Summarization-Service
### Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
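A minimal, hedged sketch of the usage described above; the article text is a placeholder.
```python
# Hedged sketch: summarize a news article with this checkpoint.
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("manojpreveen/distilbart-cnn-v3")
tokenizer = BartTokenizer.from_pretrained("manojpreveen/distilbart-cnn-v3")

article = "..."  # placeholder: a long news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```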
### Metrics for DistilBART models
| Model Name | Params (M) | Inference Time (ms) | Speedup | Rouge 2 | Rouge-L |
|:------------------------------------|------------:|----------------------:|----------:|----------:|----------:|
| facebook/bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| manojpreveen/distilbart-cnn-v3 | 306 | 307 | 1.24 | 21.26 | 30.59 |
| manojpreveen/distilbart-cnn-v2 | 255 | 214 | 1.78 | 20.57 | 30.00 |
| manojpreveen/distilbart-cnn-v1 | 230 | 182 | 2.09 | 20.17 | 29.70 |
|
ludziej/calculator_model_test
|
ludziej
| 2024-03-04T14:36:00Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-27T13:30:00Z |
---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9786 | 1.0 | 6 | 2.2468 |
| 2.0116 | 2.0 | 12 | 1.7255 |
| 1.5628 | 3.0 | 18 | 1.3183 |
| 1.2273 | 4.0 | 24 | 1.0790 |
| 1.0219 | 5.0 | 30 | 0.9549 |
| 0.9147 | 6.0 | 36 | 0.8300 |
| 0.7993 | 7.0 | 42 | 0.7326 |
| 0.7256 | 8.0 | 48 | 0.7028 |
| 0.6677 | 9.0 | 54 | 0.6345 |
| 0.6369 | 10.0 | 60 | 0.5926 |
| 0.5883 | 11.0 | 66 | 0.5722 |
| 0.5506 | 12.0 | 72 | 0.5835 |
| 0.5504 | 13.0 | 78 | 0.5491 |
| 0.5232 | 14.0 | 84 | 0.5165 |
| 0.4977 | 15.0 | 90 | 0.4719 |
| 0.4754 | 16.0 | 96 | 0.4910 |
| 0.4706 | 17.0 | 102 | 0.4623 |
| 0.4659 | 18.0 | 108 | 0.4202 |
| 0.4186 | 19.0 | 114 | 0.3720 |
| 0.385 | 20.0 | 120 | 0.3768 |
| 0.3692 | 21.0 | 126 | 0.3751 |
| 0.3775 | 22.0 | 132 | 0.3508 |
| 0.3359 | 23.0 | 138 | 0.3427 |
| 0.3412 | 24.0 | 144 | 0.3015 |
| 0.3128 | 25.0 | 150 | 0.2826 |
| 0.2892 | 26.0 | 156 | 0.2688 |
| 0.28 | 27.0 | 162 | 0.2512 |
| 0.2653 | 28.0 | 168 | 0.2385 |
| 0.255 | 29.0 | 174 | 0.2275 |
| 0.249 | 30.0 | 180 | 0.2108 |
| 0.2316 | 31.0 | 186 | 0.1894 |
| 0.2184 | 32.0 | 192 | 0.1777 |
| 0.2057 | 33.0 | 198 | 0.1644 |
| 0.186 | 34.0 | 204 | 0.1527 |
| 0.1853 | 35.0 | 210 | 0.1484 |
| 0.1721 | 36.0 | 216 | 0.1380 |
| 0.165 | 37.0 | 222 | 0.1336 |
| 0.1544 | 38.0 | 228 | 0.1299 |
| 0.153 | 39.0 | 234 | 0.1288 |
| 0.158 | 40.0 | 240 | 0.1268 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mharnsingh/DS_midterm
|
mharnsingh
| 2024-03-04T14:34:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-29T14:47:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
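No snippet was provided; given this repository's `text-classification` pipeline tag, a generic (hypothetical) starting point might be:
```python
from transformers import pipeline

# Hypothetical usage inferred from the repository's text-classification pipeline tag.
classifier = pipeline("text-classification", model="mharnsingh/DS_midterm")
print(classifier("Replace this with a sentence to classify."))
```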
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PawelNarcos/calculator_model_test
|
PawelNarcos
| 2024-03-04T14:34:04Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-04T13:51:18Z |
---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3492 | 1.0 | 11 | 1.8446 |
| 1.6854 | 2.0 | 22 | 1.3965 |
| 1.1935 | 3.0 | 33 | 0.8348 |
| 0.5028 | 4.0 | 44 | 0.3122 |
| 0.2788 | 5.0 | 55 | 0.2400 |
| 0.2212 | 6.0 | 66 | 0.1893 |
| 0.1813 | 7.0 | 77 | 0.1544 |
| 0.1515 | 8.0 | 88 | 0.1281 |
| 0.1206 | 9.0 | 99 | 0.0903 |
| 0.087 | 10.0 | 110 | 0.0571 |
| 0.058 | 11.0 | 121 | 0.0359 |
| 0.0378 | 12.0 | 132 | 0.0204 |
| 0.0249 | 13.0 | 143 | 0.0126 |
| 0.0169 | 14.0 | 154 | 0.0085 |
| 0.0123 | 15.0 | 165 | 0.0061 |
| 0.009 | 16.0 | 176 | 0.0047 |
| 0.0073 | 17.0 | 187 | 0.0037 |
| 0.006 | 18.0 | 198 | 0.0031 |
| 0.0049 | 19.0 | 209 | 0.0024 |
| 0.0043 | 20.0 | 220 | 0.0023 |
| 0.0037 | 21.0 | 231 | 0.0020 |
| 0.0033 | 22.0 | 242 | 0.0019 |
| 0.0032 | 23.0 | 253 | 0.0018 |
| 0.003 | 24.0 | 264 | 0.0016 |
| 0.0025 | 25.0 | 275 | 0.0015 |
| 0.0024 | 26.0 | 286 | 0.0014 |
| 0.0023 | 27.0 | 297 | 0.0014 |
| 0.0022 | 28.0 | 308 | 0.0014 |
| 0.0021 | 29.0 | 319 | 0.0013 |
| 0.002 | 30.0 | 330 | 0.0012 |
| 0.0019 | 31.0 | 341 | 0.0012 |
| 0.0018 | 32.0 | 352 | 0.0012 |
| 0.0018 | 33.0 | 363 | 0.0012 |
| 0.0017 | 34.0 | 374 | 0.0011 |
| 0.0018 | 35.0 | 385 | 0.0011 |
| 0.0018 | 36.0 | 396 | 0.0011 |
| 0.0017 | 37.0 | 407 | 0.0011 |
| 0.0016 | 38.0 | 418 | 0.0011 |
| 0.0016 | 39.0 | 429 | 0.0011 |
| 0.0015 | 40.0 | 440 | 0.0011 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
luqmanxyz/Maya_Hermes-2.5-Mistral-7B
|
luqmanxyz
| 2024-03-04T14:33:15Z | 1,353 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T00:51:55Z |
---
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
model-index:
- name: Maya_Hermes-2.5-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.23
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.89
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is a DPO-finetuned variation of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), trained on the argilla/distilabel-intel-orca-dpo-pairs dataset.
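The card does not include the training script; a minimal sketch of what DPO fine-tuning on this pairwise-preference dataset could look like with TRL is shown below. The hyperparameters, column mapping, and output directory are assumptions, not the author's recipe:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPOTrainer expects "prompt", "chosen" and "rejected" columns; the exact
# column mapping for this dataset is an assumption here.
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
dataset = dataset.rename_column("input", "prompt")

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds the frozen reference model internally
    args=TrainingArguments(output_dir="maya-dpo", per_device_train_batch_size=1),
    beta=0.1,        # assumed value; the card does not report the DPO beta
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```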
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_luqmanxyz__Maya_Hermes-2.5-Mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.60|
|AI2 Reasoning Challenge (25-Shot)|66.30|
|HellaSwag (10-Shot) |85.07|
|MMLU (5-Shot) |63.23|
|TruthfulQA (0-shot) |55.89|
|Winogrande (5-shot) |78.85|
|GSM8k (5-shot) |62.24|
|
rfdv/dummy-model-2
|
rfdv
| 2024-03-04T14:32:42Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-04T14:28:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
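No snippet was provided; given this repository's `fill-mask` pipeline tag (a BERT-style model), a generic (hypothetical) starting point might be:
```python
from transformers import pipeline

# Hypothetical usage inferred from the repository's fill-mask pipeline tag.
fill_mask = pipeline("fill-mask", model="rfdv/dummy-model-2")
print(fill_mask("Paris is the [MASK] of France."))
```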
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
paulilioaica/Hugo-7B-slerp
|
paulilioaica
| 2024-03-04T14:30:01Z | 54 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"beowolx/CodeNinja-1.0-OpenChat-7B",
"conversational",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:merge:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T14:24:28Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- beowolx/CodeNinja-1.0-OpenChat-7B
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- beowolx/CodeNinja-1.0-OpenChat-7B
model-index:
- name: Hugo-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=paulilioaica/Hugo-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=paulilioaica/Hugo-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=paulilioaica/Hugo-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.13
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=paulilioaica/Hugo-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=paulilioaica/Hugo-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=paulilioaica/Hugo-7B-slerp
name: Open LLM Leaderboard
---
# Hugo-7B-slerp
<p align="center">
<img src="https://cdn.openart.ai/stable_diffusion/54be6f0516fee5ce9b3f8a8b68620a05059fc4cf_2000x2000.webp" alt="alt text" class="center" width="300"/>
</p>
Hugo-7B-slerp is a successful merge of the following models using mergekit:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: beowolx/CodeNinja-1.0-OpenChat-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
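For intuition, `merge_method: slerp` interpolates each pair of weight tensors along the arc between them rather than along a straight line, with `t` controlling how far the result leans toward the second model. A minimal sketch of the idea (not mergekit's exact implementation):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```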
## 📈 Performance
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [paulilioaica/Hugo-7B-slerp](#) | **67.07** | **64.51** | 84.77 | **62.54** | 57.13 | **80.03** | 53.45 |
| [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 65.71 | 63.14 | 84.88 | 60.78 | 68.26 | 77.19 | 40.03 |
| [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) | 67.4 | 63.48 | 83.65 | 63.77 | 47.16 | 79.79 | 66.57 |
Bold values mark the benchmarks where this merge outperforms the base model.
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulilioaica/Hugo-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
# Build the prompt with the model's chat template, then generate.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🛈 More on mergekit
[mergekit](https://huggingface.co/blog/mlabonne/merge-models)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_paulilioaica__Hugo-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.07|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot) |84.77|
|MMLU (5-Shot) |62.54|
|TruthfulQA (0-shot) |57.13|
|Winogrande (5-shot) |80.03|
|GSM8k (5-shot) |53.45|
|
Azzedde/mistral-lora-chess
|
Azzedde
| 2024-03-04T14:29:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-04T14:29:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vengeance12/Contineu
|
Vengeance12
| 2024-03-04T14:27:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-04T14:26:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
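No snippet was provided; given this repository's `image-classification` pipeline tag (a ViT model), a generic (hypothetical) starting point might be:
```python
from transformers import pipeline

# Hypothetical usage inferred from the repository's image-classification pipeline tag.
classifier = pipeline("image-classification", model="Vengeance12/Contineu")
print(classifier("path/to/image.jpg"))  # placeholder path; a PIL image also works
```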
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GIZ/VULNERABILITY-multilabel-mpnet
|
GIZ
| 2024-03-04T14:25:18Z | 54 | 1 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"region:us"
] |
text-classification
| 2024-02-26T14:24:06Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Implementing the reform required strong support from all ministries involved.
A major effort was required to present the conceptual change to car importers,
politicians and the public. A great deal was also invested in public relations
to describe the benefits of the tax, which by many was perceived as yet another
attempt to increase tax revenues. A number of the most popular car models’ prices
were about to increase – mostly large family, luxury and sport cars – but for
many models, the retail price was actually reduced.
- text: Workers in the formal sector. Formal sector workers also face economic risks.
A number of them experience income instability due to contractualization, retrenchment,
and firm closures. In 2014, contractual workers accounted for 22 percent of the
total 4.5 million workers employed in establishments with 20 or more employees.
- text: Building additional dams and power stations to further develop energy generation
potential from the same river flow as well as develop new dam sites on parallel
rivers in order to maintain the baseline hydropower electricity generation capacity
to levels attainable under a ‘no-climate change’ scenario. Developing and implementing
climate change compatible building/construction codes for buildings, roads, airports,
airfields, dry ports, railways, bridges, dams and irrigation canals that are safe
for human life and minimize economic damage that is likely to result from increasing
extremes in flooding.
- text: Another factor that increases farmer vulnerability is the remoteness of farm
villages and lack of adequate road infrastructure. Across the three regions, roads
are in a poor state and unevenly distributed, with many villages lacking roads
that connect them to other villages. Even the main roads are often accessible
only during the dry season. The livelihood implications of this isolation are
significant, as farmers have difficulties getting their products to markets as
well as obtaining agricultural inputs; in addition, farmers generally have to
pay higher prices for agricultural inputs in remote areas, reducing their profit
margins
- text: This project aims to construct a desalination plant in the capital city in
order to respond directly to drinking water supply needs. This new plant, which
will have a capacity of 22,500 m3 daily, easily expandable to 45,000 m3, will
be fuelled by renewable energy, which is expected to be provided by a wind farm
planned for the second phase of the project. Funding: European Union. Rural Community
Development and Water Mobilization Project (PRODERMO).
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/all-mpnet-base-v2
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 18 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("GIZ/VULNERABILITY-multilabel-mpnet")
# Run inference
preds = model("Workers in the formal sector. Formal sector workers also face economic risks. A number of them experience income instability due to contractualization, retrenchment, and firm closures. In 2014, contractual workers accounted for 22 percent of the total 4.5 million workers employed in establishments with 20 or more employees.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 21 | 72.6472 | 238 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 0)
- max_steps: -1
- sampling_strategy: undersampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
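These values map directly onto the SetFit 1.0 `TrainingArguments` class; a minimal reconstruction is sketched below. The multi-label head setup and the training dataset are assumptions, since they are not spelled out above:
```python
from setfit import SetFitModel, Trainer, TrainingArguments

# Assumed multi-label setup: 18 classes with a differentiable SetFitHead.
model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    use_differentiable_head=True,
    head_params={"out_features": 18},
)

args = TrainingArguments(
    batch_size=(16, 2),
    num_epochs=(1, 0),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="undersampling",
    warmup_proportion=0.01,
    seed=42,
)

# train_dataset is a placeholder; it must provide "text" and "label" columns.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```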
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.1906 | - |
| 0.0316 | 50 | 0.1275 | 0.1394 |
| 0.0631 | 100 | 0.0851 | 0.1247 |
| 0.0947 | 150 | 0.0959 | 0.1269 |
| 0.1263 | 200 | 0.1109 | 0.1179 |
| 0.1578 | 250 | 0.0923 | 0.1354 |
| 0.1894 | 300 | 0.063 | 0.1292 |
| 0.2210 | 350 | 0.0555 | 0.1326 |
| 0.2525 | 400 | 0.0362 | 0.1127 |
| 0.2841 | 450 | 0.0582 | 0.132 |
| 0.3157 | 500 | 0.0952 | 0.1339 |
| 0.3472 | 550 | 0.0793 | 0.1171 |
| 0.3788 | 600 | 0.059 | 0.1187 |
| 0.4104 | 650 | 0.0373 | 0.1131 |
| 0.4419 | 700 | 0.0593 | 0.1144 |
| 0.4735 | 750 | 0.0405 | 0.1174 |
| 0.5051 | 800 | 0.0284 | 0.1196 |
| 0.5366 | 850 | 0.0329 | 0.1116 |
| 0.5682 | 900 | 0.0895 | 0.1193 |
| 0.5997 | 950 | 0.0576 | 0.1159 |
| 0.6313 | 1000 | 0.0385 | 0.1203 |
| 0.6629 | 1050 | 0.0842 | 0.1195 |
| 0.6944 | 1100 | 0.0274 | 0.113 |
| 0.7260 | 1150 | 0.0226 | 0.1137 |
| 0.7576 | 1200 | 0.0276 | 0.1204 |
| 0.7891 | 1250 | 0.0355 | 0.1163 |
| 0.8207 | 1300 | 0.077 | 0.1161 |
| 0.8523 | 1350 | 0.0735 | 0.1135 |
| 0.8838 | 1400 | 0.0357 | 0.1175 |
| 0.9154 | 1450 | 0.0313 | 0.1207 |
| 0.9470 | 1500 | 0.0241 | 0.1159 |
| 0.9785 | 1550 | 0.0339 | 0.1161 |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.3.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
rmalinowska/calculator_model_test
|
rmalinowska
| 2024-03-04T14:24:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-04T14:13:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.462 | 1.0 | 5 | 2.8547 |
| 2.538 | 2.0 | 10 | 2.0897 |
| 1.9404 | 3.0 | 15 | 1.7253 |
| 1.7165 | 4.0 | 20 | 1.6144 |
| 1.6205 | 5.0 | 25 | 1.5616 |
| 1.5595 | 6.0 | 30 | 1.5343 |
| 1.533 | 7.0 | 35 | 1.5129 |
| 1.5116 | 8.0 | 40 | 1.4738 |
| 1.47 | 9.0 | 45 | 1.4300 |
| 1.4339 | 10.0 | 50 | 1.4170 |
| 1.4026 | 11.0 | 55 | 1.3703 |
| 1.3849 | 12.0 | 60 | 1.3283 |
| 1.3616 | 13.0 | 65 | 1.2966 |
| 1.3063 | 14.0 | 70 | 1.2537 |
| 1.2479 | 15.0 | 75 | 1.1862 |
| 1.1756 | 16.0 | 80 | 1.1101 |
| 1.1533 | 17.0 | 85 | 1.1242 |
| 1.1344 | 18.0 | 90 | 1.1090 |
| 1.0864 | 19.0 | 95 | 1.0031 |
| 1.0175 | 20.0 | 100 | 0.9584 |
| 0.9662 | 21.0 | 105 | 0.9226 |
| 0.9294 | 22.0 | 110 | 0.8753 |
| 0.8923 | 23.0 | 115 | 0.8596 |
| 0.8676 | 24.0 | 120 | 0.8074 |
| 0.8371 | 25.0 | 125 | 0.7753 |
| 0.8185 | 26.0 | 130 | 0.7829 |
| 0.8092 | 27.0 | 135 | 0.7452 |
| 0.7693 | 28.0 | 140 | 0.7231 |
| 0.7534 | 29.0 | 145 | 0.7025 |
| 0.7432 | 30.0 | 150 | 0.6977 |
| 0.7249 | 31.0 | 155 | 0.6820 |
| 0.7161 | 32.0 | 160 | 0.6609 |
| 0.703 | 33.0 | 165 | 0.6529 |
| 0.6926 | 34.0 | 170 | 0.6395 |
| 0.6771 | 35.0 | 175 | 0.6246 |
| 0.667 | 36.0 | 180 | 0.6204 |
| 0.6633 | 37.0 | 185 | 0.6057 |
| 0.6498 | 38.0 | 190 | 0.6014 |
| 0.642 | 39.0 | 195 | 0.5947 |
| 0.6433 | 40.0 | 200 | 0.5921 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Kemal12/model1233
|
Kemal12
| 2024-03-04T14:24:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-04T14:23:57Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: model1233
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model1233
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
vicgalle/TruthfulQwen1.5-1.8B
|
vicgalle
| 2024-03-04T14:24:14Z | 48 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:jondurbin/truthy-dpo-v0.1",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T19:55:14Z |
---
library_name: transformers
license: apache-2.0
datasets:
- jondurbin/truthy-dpo-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
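No official snippet is provided; given the Qwen2 architecture and `conversational` tag, a generic (hypothetical) chat-generation sketch might be:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/TruthfulQwen1.5-1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Is the Great Wall of China visible from space?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```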
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LordAbsurd2137/calculator_model_test
|
LordAbsurd2137
| 2024-03-04T14:19:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-04T14:18:42Z |
---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3688 | 1.0 | 6 | 2.7446 |
| 2.3519 | 2.0 | 12 | 2.0185 |
| 1.8781 | 3.0 | 18 | 1.7058 |
| 1.6266 | 4.0 | 24 | 1.6304 |
| 1.614 | 5.0 | 30 | 1.5637 |
| 1.6233 | 6.0 | 36 | 1.5529 |
| 1.5566 | 7.0 | 42 | 1.5884 |
| 1.5389 | 8.0 | 48 | 1.5444 |
| 1.5121 | 9.0 | 54 | 1.5489 |
| 1.5376 | 10.0 | 60 | 1.5411 |
| 1.532 | 11.0 | 66 | 1.5396 |
| 1.5576 | 12.0 | 72 | 1.5405 |
| 1.5914 | 13.0 | 78 | 1.5442 |
| 1.5226 | 14.0 | 84 | 1.5356 |
| 1.5129 | 15.0 | 90 | 1.5327 |
| 1.5037 | 16.0 | 96 | 1.5333 |
| 1.5191 | 17.0 | 102 | 1.5379 |
| 1.4822 | 18.0 | 108 | 1.5329 |
| 1.5205 | 19.0 | 114 | 1.5272 |
| 1.6166 | 20.0 | 120 | 1.5257 |
| 1.5019 | 21.0 | 126 | 1.5286 |
| 1.5512 | 22.0 | 132 | 1.5284 |
| 1.5532 | 23.0 | 138 | 1.5302 |
| 1.5016 | 24.0 | 144 | 1.5287 |
| 1.4958 | 25.0 | 150 | 1.5302 |
| 1.4917 | 26.0 | 156 | 1.5260 |
| 1.5 | 27.0 | 162 | 1.5249 |
| 1.5572 | 28.0 | 168 | 1.5272 |
| 1.5552 | 29.0 | 174 | 1.5259 |
| 1.545 | 30.0 | 180 | 1.5121 |
| 1.519 | 31.0 | 186 | 1.5229 |
| 1.5074 | 32.0 | 192 | 1.5117 |
| 1.4766 | 33.0 | 198 | 1.4902 |
| 1.4597 | 34.0 | 204 | 1.4815 |
| 1.4488 | 35.0 | 210 | 1.4592 |
| 1.4401 | 36.0 | 216 | 1.4433 |
| 1.4376 | 37.0 | 222 | 1.4322 |
| 1.3936 | 38.0 | 228 | 1.4231 |
| 1.4553 | 39.0 | 234 | 1.4220 |
| 1.4191 | 40.0 | 240 | 1.4131 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|