modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 18:33:19) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 18:33:14) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
muzammil-eds/Tinyllama-Chat-v0.4-OpenOrca | muzammil-eds | 2024-03-04T14:12:52Z | 4 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-04T14:08:47Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
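Until the authors provide a snippet, here is a minimal sketch based only on the repo's `transformers`/`text-generation` tags; the checkpoint's prompt format is undocumented, so treat this as an assumption rather than the authors' recommended usage:

```python
# Minimal sketch, not author-provided: standard transformers text-generation
# pipeline. device_map="auto" requires the accelerate package.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="muzammil-eds/Tinyllama-Chat-v0.4-OpenOrca",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(pipe("What is the OpenOrca dataset?", max_new_tokens=128)[0]["generated_text"])
```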
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LN1996/output_run_2 | LN1996 | 2024-03-04T14:10:40Z | 1 | 0 | diffusers | ["diffusers", "tensorboard", "text-to-image", "lora", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-03-04T13:51:59Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- diffusers
- lora
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a room with professional interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - LN1996/output_run_2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "photo of a room with professional interior design" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
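The snippet above was left as a TODO by the training script. A minimal sketch of how such weights are typically loaded, assuming the standard diffusers LoRA API rather than anything author-verified:

```python
# Minimal sketch (assumption): load the SD 1.5 base model, attach the LoRA
# weights from this repo, and sample with the instance prompt from the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("LN1996/output_run_2")

image = pipe(
    "photo of a room with professional interior design",
    num_inference_steps=30,
).images[0]
image.save("room.png")
```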
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
mayacinka/yam-jom-7B-ties | mayacinka | 2024-03-04T14:08:08Z | 49 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "yam-peleg/Experiment26-7B", "base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "base_model:merge:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "base_model:yam-peleg/Experiment26-7B", "base_model:merge:yam-peleg/Experiment26-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-03T05:37:41Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
model-index:
- name: yam-jom-7B-ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.51
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
---
# yam-jom-7B-ties
yam-jom-7B-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
models:
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
parameters:
weight: 0.35
- model: yam-peleg/Experiment26-7B
parameters:
weight: 0.65
base_model: yam-peleg/Experiment26-7B
merge_method: ties
dtype: bfloat16
```
## 💻 Usage
```python
# Notebook magic; from a plain shell, drop the leading "!".
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/yam-jom-7B-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B-ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.44|
|AI2 Reasoning Challenge (25-Shot)|73.21|
|HellaSwag (10-Shot) |89.05|
|MMLU (5-Shot) |64.77|
|TruthfulQA (0-shot) |77.51|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |69.60|
|
kaitchup/Llama-2-7b-mt-German-to-English | kaitchup | 2024-03-04T14:07:29Z | 6 | 3 | peft | ["peft", "translation", "en", "de", "dataset:kaitchup/opus-German-to-English", "license:mit", "region:us"] | translation | 2023-10-26T16:48:38Z |
---
library_name: peft
license: mit
language:
- en
- de
datasets:
- kaitchup/opus-German-to-English
tags:
- translation
---
# Model Card for Model ID
This is an adapter for Meta's Llama 2 7B fine-tuned for translating German text into English.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** LoRA Adapter for Llama 2 7B
- **Language(s) (NLP):** German, English
- **License:** MIT license
## Uses
This adapter must be loaded on top of Llama 2 7B. It has been fine-tuned with QLoRA. For optimal results, the base model must be loaded with the exact same configuration used during fine-tuning.
You can use the following code to load the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
from peft import PeftModel
base_model = "meta-llama/Llama-2-7b-hf"
compute_dtype = getattr(torch, "float16")
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
base_model, device_map={"": 0}, quantization_config=bnb_config
)
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)
model = PeftModel.from_pretrained(model, "kaitchup/Llama-2-7b-mt-German-to-English")
```
Then, run the model as follows:
```python
my_text = "" #put your text to translate here
prompt = my_text+" ###>"
tokenized_input = tokenizer(prompt, return_tensors="pt")
input_ids = tokenized_input["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
num_beams=10,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=130
)
for seq in generation_output.sequences:
output = tokenizer.decode(seq, skip_special_tokens=True)
print(output.split("###>")[1].strip())
```
## Model Card Contact
[The Kaitchup](https://kaitchup.substack.com/)
|
rfdv/bert-fine-tuned-cola | rfdv | 2024-03-04T14:07:26Z | 3 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-03-04T13:24:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
base_model: bert-base-cased
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Matthews Correlation: 0.5843
## Model description
More information needed
## Intended uses & limitations
More information needed
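Given the model name and the Matthews correlation metric, this looks like a CoLA-style linguistic-acceptability classifier. A minimal usage sketch (assumption: the exported labels may surface as generic `LABEL_0`/`LABEL_1` unless the config maps them):

```python
# Minimal sketch, not author-provided: score sentences for acceptability.
from transformers import pipeline

clf = pipeline("text-classification", model="rfdv/bert-fine-tuned-cola")
print(clf(["The cat sat on the mat.", "The cat sat mat the on."]))
```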
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.467 | 1.0 | 1069 | 0.4864 | 0.5078 |
| 0.335 | 2.0 | 2138 | 0.5424 | 0.6041 |
| 0.2123 | 3.0 | 3207 | 0.7721 | 0.5843 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mayacinka/NeuralZephyr-Beagle-7B | mayacinka | 2024-03-04T14:06:16Z | 16 | 1 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:CultriX/NeuralTrix-7B-dpo", "base_model:finetune:CultriX/NeuralTrix-7B-dpo", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-16T17:15:00Z |
---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model: CultriX/NeuralTrix-7B-dpo
model-index:
- name: NeuralZephyr-Beagle-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.17
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B
name: Open LLM Leaderboard
---

# NeuralZephyr-Beagle-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Code credit: [this excellent medium blog](https://medium.com/towards-data-science/merge-large-language-models-with-mergekit-2118fb392b54)
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with CultriX/NeuralTrix-7B-dpo as the base.
### Models Merged
The following models were included in the merge:
* mlabonne/NeuralBeagle14-7B
* HuggingFaceH4/zephyr-7b-alpha
### Benchmarks
#### Open LLM Leaderboard
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| ------------------------------ | ------- | ---- | --------- | ----- | ---------- | ---------- | ----- |
| mayacinka/NeuralZephyr-Beagle-7B | 71.57 | 68.6 | 86.38 | 64.67 | 65.17 | 81.14 | 63.46 |
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: CultriX/NeuralTrix-7B-dpo
- model: HuggingFaceH4/zephyr-7b-alpha
parameters:
density: 0.83
weight: 0.4
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.83
weight: 0.6
merge_method: dare_ties
base_model: CultriX/NeuralTrix-7B-dpo
parameters:
int8_mask: true
dtype: bfloat16
```
# Inference
```python
# pip install transformers accelerate  (accelerate is needed for device_map="auto")
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/NeuralZephyr-Beagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__NeuralZephyr-Beagle-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.57|
|AI2 Reasoning Challenge (25-Shot)|68.60|
|HellaSwag (10-Shot) |86.38|
|MMLU (5-Shot) |64.67|
|TruthfulQA (0-shot) |65.17|
|Winogrande (5-shot) |81.14|
|GSM8k (5-shot) |63.46|
|
mayacinka/ramonda-7b-dpo-ties | mayacinka | 2024-03-04T14:06:05Z | 137 | 10 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "paulml/OGNO-7B", "bardsai/jaskier-7b-dpo-v4.3", "base_model:bardsai/jaskier-7b-dpo-v4.3", "base_model:merge:bardsai/jaskier-7b-dpo-v4.3", "base_model:paulml/OGNO-7B", "base_model:merge:paulml/OGNO-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-17T21:22:39Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- paulml/OGNO-7B
- bardsai/jaskier-7b-dpo-v4.3
base_model:
- paulml/OGNO-7B
- bardsai/jaskier-7b-dpo-v4.3
model-index:
- name: Buttercup-7b-dpo-ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.17
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties
name: Open LLM Leaderboard
---

# ramonda-7b-dpo-ties
ramonda-7b-dpo-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/OGNO-7B](https://huggingface.co/paulml/OGNO-7B)
* [bardsai/jaskier-7b-dpo-v4.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v4.3)
## Benchmark
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|------------------------|--------:|-----:|----------:|-----:|-----------:|-----------:|------:|
| mayacinka/ramonda-7b-dpo-ties | 76.19 | 72.7 | 89.09 | 64.5 | 77.17 | 84.77 | 68.92 |
[LLM AutoEval](https://gist.github.com/majacinka/370282a808a21b28bacd2c76a998da8f)
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|----------------------|---------|---------|------------|----------|---------|
| ramonda-7b-dpo-ties | 44.67 | 77.16 | 77.6 | 49.06 | 62.12 |
## 🧩 Configuration
```yaml
models:
- model: bardsai/jaskier-7b-dpo-v5.6
# no parameters necessary for base model
- model: paulml/OGNO-7B
parameters:
density: 0.9
weight: 0.5
- model: bardsai/jaskier-7b-dpo-v4.3
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
# Notebook magic; from a plain shell, drop the leading "!".
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/ramonda-7b-dpo-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__Buttercup-7b-dpo-ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.19|
|AI2 Reasoning Challenge (25-Shot)|72.70|
|HellaSwag (10-Shot) |89.09|
|MMLU (5-Shot) |64.50|
|TruthfulQA (0-shot) |77.17|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |68.92|
|
salohnana2018/ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2 | salohnana2018 | 2024-03-04T14:06:00Z | 4 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD", "base_model:finetune:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-03-04T12:50:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
base_model: salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD
model-index:
- name: ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2
This model is a fine-tuned version of [salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD](https://huggingface.co/salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7587
- Accuracy: 0.8941
- F1: 0.8941
- Precision: 0.8941
- Recall: 0.8941
## Model description
More information needed
## Intended uses & limitations
More information needed
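The model name indicates a sentence-pair aspect-based sentiment classifier. A minimal sketch, with two explicit assumptions: the aspect term is passed as the first segment, and labels may surface as generic `LABEL_*`; the card does not document the pairing convention, so verify before relying on it:

```python
# Minimal sketch, not author-provided: sentence-pair input for ABSA.
# Segment order (aspect first) is an assumption.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="salohnana2018/ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2",
)
print(clf({"text": "الخدمة", "text_pair": "الفندق رائع والخدمة ممتازة"}))
```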
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5254 | 1.0 | 265 | 0.4268 | 0.8483 | 0.8483 | 0.8483 | 0.8483 |
| 0.3572 | 2.0 | 530 | 0.3457 | 0.8563 | 0.8563 | 0.8563 | 0.8563 |
| 0.2477 | 3.0 | 795 | 0.5427 | 0.8795 | 0.8795 | 0.8795 | 0.8795 |
| 0.1905 | 4.0 | 1060 | 0.8314 | 0.8899 | 0.8899 | 0.8899 | 0.8899 |
| 0.1353 | 5.0 | 1325 | 1.0504 | 0.8852 | 0.8852 | 0.8852 | 0.8852 |
| 0.12 | 6.0 | 1590 | 0.7891 | 0.8842 | 0.8842 | 0.8842 | 0.8842 |
| 0.0749 | 7.0 | 1855 | 1.3696 | 0.8894 | 0.8894 | 0.8894 | 0.8894 |
| 0.097 | 8.0 | 2120 | 0.9817 | 0.8904 | 0.8904 | 0.8904 | 0.8904 |
| 0.0624 | 9.0 | 2385 | 1.0450 | 0.8847 | 0.8847 | 0.8847 | 0.8847 |
| 0.0582 | 10.0 | 2650 | 1.3148 | 0.8970 | 0.8970 | 0.8970 | 0.8970 |
| 0.0599 | 11.0 | 2915 | 1.4069 | 0.8946 | 0.8946 | 0.8946 | 0.8946 |
| 0.0451 | 12.0 | 3180 | 1.0183 | 0.8889 | 0.8889 | 0.8889 | 0.8889 |
| 0.0309 | 13.0 | 3445 | 1.3034 | 0.8932 | 0.8932 | 0.8932 | 0.8932 |
| 0.0251 | 14.0 | 3710 | 1.5148 | 0.8946 | 0.8946 | 0.8946 | 0.8946 |
| 0.0245 | 15.0 | 3975 | 1.5136 | 0.8946 | 0.8946 | 0.8946 | 0.8946 |
| 0.0153 | 16.0 | 4240 | 1.3876 | 0.8927 | 0.8927 | 0.8927 | 0.8927 |
| 0.0161 | 17.0 | 4505 | 1.6176 | 0.8885 | 0.8885 | 0.8885 | 0.8885 |
| 0.0166 | 18.0 | 4770 | 1.6110 | 0.8937 | 0.8937 | 0.8937 | 0.8937 |
| 0.0137 | 19.0 | 5035 | 1.7113 | 0.8960 | 0.8960 | 0.8960 | 0.8960 |
| 0.0111 | 20.0 | 5300 | 1.7241 | 0.8946 | 0.8946 | 0.8946 | 0.8946 |
| 0.0101 | 21.0 | 5565 | 1.6722 | 0.8970 | 0.8970 | 0.8970 | 0.8970 |
| 0.0142 | 22.0 | 5830 | 1.6423 | 0.8904 | 0.8904 | 0.8904 | 0.8904 |
| 0.0118 | 23.0 | 6095 | 1.6384 | 0.8904 | 0.8904 | 0.8904 | 0.8904 |
| 0.0083 | 24.0 | 6360 | 1.6616 | 0.8922 | 0.8922 | 0.8922 | 0.8922 |
| 0.0124 | 25.0 | 6625 | 1.9046 | 0.8951 | 0.8951 | 0.8951 | 0.8951 |
| 0.0154 | 26.0 | 6890 | 1.6547 | 0.8974 | 0.8974 | 0.8974 | 0.8974 |
| 0.0086 | 27.0 | 7155 | 1.6440 | 0.8932 | 0.8932 | 0.8932 | 0.8932 |
| 0.0077 | 28.0 | 7420 | 1.7566 | 0.8941 | 0.8941 | 0.8941 | 0.8941 |
| 0.0076 | 29.0 | 7685 | 1.7419 | 0.8937 | 0.8937 | 0.8937 | 0.8937 |
| 0.0078 | 30.0 | 7950 | 1.7587 | 0.8941 | 0.8941 | 0.8941 | 0.8941 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
declare-lab/starling-7B | declare-lab | 2024-03-04T14:05:15Z | 58 | 10 | transformers | ["transformers", "pytorch", "llama", "text-generation", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:declare-lab/HarmfulQA", "arxiv:2308.09662", "arxiv:2402.11746", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-18T16:51:04Z |
---
license: apache-2.0
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- declare-lab/HarmfulQA
model-index:
- name: starling-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 76.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 10.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
---
[**Paper**](https://arxiv.org/abs/2308.09662) | [**Github**](https://github.com/declare-lab/red-instruct) | [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA)| [**Model**](https://huggingface.co/declare-lab/starling-7B)
> 📣 Update 2/02/24: Introducing Resta: **Safety Re-alignment of Language Models**. [**Paper**](https://arxiv.org/abs/2402.11746) [**Github**](https://github.com/declare-lab/resta) [**Dataset**](https://huggingface.co/datasets/declare-lab/CategoricalHarmfulQ)
As part of our research efforts to make LLMs safer, we created **Starling**. It is obtained by fine-tuning Vicuna-7B on [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA), a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper, [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://arxiv.org/abs/2308.09662).
<img src="https://declare-lab.github.io/assets/images/logos/starling-final.png" alt="Image" width="100" height="100">
Experimental results on several safety benchmark datasets indicate that **Starling** is a safer model compared to the baseline model, Vicuna.
<img src="https://declare-lab.github.io/assets/images/logos/method.png" alt="Image" width="1000" height="335">
<h2>Experimental Results</h2>
Compared to Vicuna, Starling shows an **average 5.2% reduction in Attack Success Rate (ASR)** on DangerousQA and HarmfulQA across three different prompts.
Compared to Vicuna, Starling shows an **average 3-7% improvement in HHH score** measured on the BBH-HHH benchmark.
<img src="https://declare-lab.github.io/assets/images/logos/starling-results.png" alt="Image" width="1000" height="335">
TruthfulQA (MC2): **48.90 vs Vicuna's 47.00**
MMLU (5-shot): **46.69 vs Vicuna's 47.18**
BBH (3-shot): **33.47 vs Vicuna's 33.05**
<h2>Jailbreak Prompt for harmfulness eval using Red Eval as reported in the paper</h2>
This jailbreak prompt (termed as Chain of Utterances (CoU) prompt in the paper) shows a 65% Attack Success Rate (ASR) on GPT-4 and 72% on ChatGPT.
<img src="https://declare-lab.github.io/assets/images/logos/jailbreakprompt_main_paper.png" alt="Image" width="1000" height="1000">
<h2>HarmfulQA Data Collection</h2>
We also release our **HarmfulQA** dataset with 1,960 harmful questions (covering 10 topics, each with 10 subtopics) for red-teaming, along with conversations based on them that were used for model safety alignment; more details [**here**](https://huggingface.co/datasets/declare-lab/HarmfulQA). The following figure describes the data collection process.
<img src="https://declare-lab.github.io/assets/images/logos/data_gen.png" alt="Image" width="1000" height="1000">
_Note: This model is referred to as Starling (Blue) in the paper. We shall soon release Starling (Blue-Red) which was trained on harmful data using an objective function that helps the model learn from the red (harmful) response data._
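The card does not include a usage snippet. A minimal sketch, under the assumption that Starling keeps its Vicuna-7B base's USER/ASSISTANT prompt style; verify the prompt format against the repo before relying on it:

```python
# Minimal sketch, not author-provided: the Vicuna-style prompt is an assumption.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="declare-lab/starling-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
prompt = "USER: Why is red-teaming useful for LLM safety? ASSISTANT:"
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```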
## Citation
```bibtex
@misc{bhardwaj2023redteaming,
title={Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment},
author={Rishabh Bhardwaj and Soujanya Poria},
year={2023},
eprint={2308.09662},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_declare-lab__starling-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |50.73|
|AI2 Reasoning Challenge (25-Shot)|51.02|
|HellaSwag (10-Shot) |76.77|
|MMLU (5-Shot) |47.75|
|TruthfulQA (0-shot) |48.18|
|Winogrande (5-shot) |70.56|
|GSM8k (5-shot) |10.08|
|
DouglasPontes/2020-Q4-50p-filtered-prog_from_Q3 | DouglasPontes | 2024-03-04T14:04:49Z | 19 | 0 | transformers | ["transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-2019-90m", "base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-02-26T14:38:39Z |
---
license: mit
base_model: cardiffnlp/twitter-roberta-base-2019-90m
tags:
- generated_from_trainer
model-index:
- name: 2020-Q4-50p-filtered-random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2020-Q4-50p-filtered-random
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2570
## Model description
More information needed
## Intended uses & limitations
More information needed
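A minimal usage sketch, not author-provided: the base model is a Twitter RoBERTa, so `<mask>` is the mask token.

```python
# Minimal sketch: standard fill-mask usage with the RoBERTa <mask> token.
from transformers import pipeline

fill = pipeline("fill-mask", model="DouglasPontes/2020-Q4-50p-filtered-prog_from_Q3")
for pred in fill("The election results were <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```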
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.03 | 8000 | 2.5888 |
| 2.8176 | 0.07 | 16000 | 2.4814 |
| 2.8176 | 0.1 | 24000 | 2.4264 |
| 2.5609 | 0.13 | 32000 | 2.3993 |
| 2.5609 | 0.17 | 40000 | 2.3761 |
| 2.4969 | 0.2 | 48000 | 2.3624 |
| 2.4969 | 0.24 | 56000 | 2.3481 |
| 2.48 | 0.27 | 64000 | 2.3399 |
| 2.48 | 0.3 | 72000 | 2.3289 |
| 2.451 | 0.34 | 80000 | 2.3221 |
| 2.451 | 0.37 | 88000 | 2.3183 |
| 2.4367 | 0.4 | 96000 | 2.3221 |
| 2.4367 | 0.44 | 104000 | 2.3142 |
| 2.4388 | 0.47 | 112000 | 2.3028 |
| 2.4388 | 0.51 | 120000 | 2.3066 |
| 2.4215 | 0.54 | 128000 | 2.3013 |
| 2.4215 | 0.57 | 136000 | 2.3039 |
| 2.4178 | 0.61 | 144000 | 2.2907 |
| 2.4178 | 0.64 | 152000 | 2.2996 |
| 2.4103 | 0.67 | 160000 | 2.2943 |
| 2.4103 | 0.71 | 168000 | 2.2900 |
| 2.4122 | 0.74 | 176000 | 2.2902 |
| 2.4122 | 0.77 | 184000 | 2.2961 |
| 2.4173 | 0.81 | 192000 | 2.2906 |
| 2.4173 | 0.84 | 200000 | 2.2925 |
| 2.4067 | 0.88 | 208000 | 2.2911 |
| 2.4067 | 0.91 | 216000 | 2.2844 |
| 2.4059 | 0.94 | 224000 | 2.2855 |
| 2.4059 | 0.98 | 232000 | 2.2811 |
| 2.4089 | 1.01 | 240000 | 2.2788 |
| 2.4089 | 1.04 | 248000 | 2.2796 |
| 2.4034 | 1.08 | 256000 | 2.2827 |
| 2.4034 | 1.11 | 264000 | 2.2803 |
| 2.408 | 1.15 | 272000 | 2.2746 |
| 2.408 | 1.18 | 280000 | 2.2851 |
| 2.3985 | 1.21 | 288000 | 2.2781 |
| 2.3985 | 1.25 | 296000 | 2.2795 |
| 2.4009 | 1.28 | 304000 | 2.2777 |
| 2.4009 | 1.31 | 312000 | 2.2770 |
| 2.4017 | 1.35 | 320000 | 2.2763 |
| 2.4017 | 1.38 | 328000 | 2.2734 |
| 2.4056 | 1.41 | 336000 | 2.2758 |
| 2.4056 | 1.45 | 344000 | 2.2763 |
| 2.4017 | 1.48 | 352000 | 2.2700 |
| 2.4017 | 1.52 | 360000 | 2.2736 |
| 2.3993 | 1.55 | 368000 | 2.2763 |
| 2.3993 | 1.58 | 376000 | 2.2792 |
| 2.3994 | 1.62 | 384000 | 2.2666 |
| 2.3994 | 1.65 | 392000 | 2.2699 |
| 2.3969 | 1.68 | 400000 | 2.2753 |
| 2.3969 | 1.72 | 408000 | 2.2707 |
| 2.4094 | 1.75 | 416000 | 2.2731 |
| 2.4094 | 1.79 | 424000 | 2.2709 |
| 2.4102 | 1.82 | 432000 | 2.2623 |
| 2.4102 | 1.85 | 440000 | 2.2751 |
| 2.4042 | 1.89 | 448000 | 2.2728 |
| 2.4042 | 1.92 | 456000 | 2.2714 |
| 2.3991 | 1.95 | 464000 | 2.2634 |
| 2.3991 | 1.99 | 472000 | 2.2695 |
| 2.3976 | 2.02 | 480000 | 2.2731 |
| 2.3976 | 2.05 | 488000 | 2.2736 |
| 2.4019 | 2.09 | 496000 | 2.2803 |
| 2.4019 | 2.12 | 504000 | 2.2699 |
| 2.4044 | 2.16 | 512000 | 2.2731 |
| 2.4044 | 2.19 | 520000 | 2.2709 |
| 2.3989 | 2.22 | 528000 | 2.2716 |
| 2.3989 | 2.26 | 536000 | 2.2668 |
| 2.4068 | 2.29 | 544000 | 2.2728 |
| 2.4068 | 2.32 | 552000 | 2.2709 |
| 2.4047 | 2.36 | 560000 | 2.2683 |
| 2.4047 | 2.39 | 568000 | 2.2731 |
| 2.3976 | 2.43 | 576000 | 2.2676 |
| 2.3976 | 2.46 | 584000 | 2.2736 |
| 2.3994 | 2.49 | 592000 | 2.2624 |
| 2.3994 | 2.53 | 600000 | 2.2773 |
| 2.3997 | 2.56 | 608000 | 2.2719 |
| 2.3997 | 2.59 | 616000 | 2.2701 |
| 2.4042 | 2.63 | 624000 | 2.2695 |
| 2.4042 | 2.66 | 632000 | 2.2666 |
| 2.3994 | 2.69 | 640000 | 2.2719 |
| 2.3994 | 2.73 | 648000 | 2.2686 |
| 2.3953 | 2.76 | 656000 | 2.2623 |
| 2.3953 | 2.8 | 664000 | 2.2662 |
| 2.402 | 2.83 | 672000 | 2.2707 |
| 2.402 | 2.86 | 680000 | 2.2662 |
| 2.3929 | 2.9 | 688000 | 2.2726 |
| 2.3929 | 2.93 | 696000 | 2.2682 |
| 2.3977 | 2.96 | 704000 | 2.2634 |
| 2.3977 | 3.0 | 712000 | 2.2685 |
| 2.4022 | 3.03 | 720000 | 2.2693 |
| 2.4022 | 3.07 | 728000 | 2.2666 |
| 2.4046 | 3.1 | 736000 | 2.2690 |
| 2.4046 | 3.13 | 744000 | 2.2641 |
| 2.3977 | 3.17 | 752000 | 2.2658 |
| 2.3977 | 3.2 | 760000 | 2.2645 |
| 2.4015 | 3.23 | 768000 | 2.2619 |
| 2.4015 | 3.27 | 776000 | 2.2671 |
| 2.393 | 3.3 | 784000 | 2.2694 |
| 2.393 | 3.33 | 792000 | 2.2662 |
| 2.3907 | 3.37 | 800000 | 2.2691 |
| 2.3907 | 3.4 | 808000 | 2.2679 |
| 2.3987 | 3.44 | 816000 | 2.2688 |
| 2.3987 | 3.47 | 824000 | 2.2655 |
| 2.4116 | 3.5 | 832000 | 2.2668 |
| 2.4116 | 3.54 | 840000 | 2.2675 |
| 2.3913 | 3.57 | 848000 | 2.2689 |
| 2.3913 | 3.6 | 856000 | 2.2642 |
| 2.3974 | 3.64 | 864000 | 2.2667 |
| 2.3974 | 3.67 | 872000 | 2.2717 |
| 2.4046 | 3.71 | 880000 | 2.2661 |
| 2.4046 | 3.74 | 888000 | 2.2705 |
| 2.4006 | 3.77 | 896000 | 2.2637 |
| 2.4006 | 3.81 | 904000 | 2.2635 |
| 2.3987 | 3.84 | 912000 | 2.2642 |
| 2.3987 | 3.87 | 920000 | 2.2691 |
| 2.4068 | 3.91 | 928000 | 2.2689 |
| 2.4068 | 3.94 | 936000 | 2.2730 |
| 2.4092 | 3.97 | 944000 | 2.2644 |
| 2.4092 | 4.01 | 952000 | 2.2706 |
| 2.4035 | 4.04 | 960000 | 2.2671 |
| 2.4035 | 4.08 | 968000 | 2.2562 |
| 2.4005 | 4.11 | 976000 | 2.2622 |
| 2.4005 | 4.14 | 984000 | 2.2642 |
| 2.406 | 4.18 | 992000 | 2.2625 |
| 2.406 | 4.21 | 1000000 | 2.2662 |
| 2.3972 | 4.24 | 1008000 | 2.2658 |
| 2.3972 | 4.28 | 1016000 | 2.2668 |
| 2.3937 | 4.31 | 1024000 | 2.2593 |
| 2.3937 | 4.35 | 1032000 | 2.2712 |
| 2.3982 | 4.38 | 1040000 | 2.2695 |
| 2.3982 | 4.41 | 1048000 | 2.2684 |
| 2.4034 | 4.45 | 1056000 | 2.2643 |
| 2.4034 | 4.48 | 1064000 | 2.2665 |
| 2.3996 | 4.51 | 1072000 | 2.2692 |
| 2.3996 | 4.55 | 1080000 | 2.2628 |
| 2.4054 | 4.58 | 1088000 | 2.2673 |
| 2.4054 | 4.61 | 1096000 | 2.2577 |
| 2.4039 | 4.65 | 1104000 | 2.2671 |
| 2.4039 | 4.68 | 1112000 | 2.2586 |
| 2.4033 | 4.72 | 1120000 | 2.2730 |
| 2.4033 | 4.75 | 1128000 | 2.2655 |
| 2.4036 | 4.78 | 1136000 | 2.2694 |
| 2.4036 | 4.82 | 1144000 | 2.2630 |
| 2.4036 | 4.85 | 1152000 | 2.2618 |
| 2.4036 | 4.88 | 1160000 | 2.2665 |
| 2.4005 | 4.92 | 1168000 | 2.2609 |
| 2.4005 | 4.95 | 1176000 | 2.2617 |
| 2.4065 | 4.99 | 1184000 | 2.2646 |
| 2.4065 | 5.02 | 1192000 | 2.2606 |
| 2.4044 | 5.05 | 1200000 | 2.2656 |
| 2.4044 | 5.09 | 1208000 | 2.2630 |
| 2.3997 | 5.12 | 1216000 | 2.2737 |
| 2.3997 | 5.15 | 1224000 | 2.2762 |
| 2.407 | 5.19 | 1232000 | 2.2669 |
| 2.407 | 5.22 | 1240000 | 2.2695 |
| 2.4013 | 5.25 | 1248000 | 2.2704 |
| 2.4013 | 5.29 | 1256000 | 2.2612 |
| 2.4118 | 5.32 | 1264000 | 2.2654 |
| 2.4118 | 5.36 | 1272000 | 2.2683 |
| 2.3953 | 5.39 | 1280000 | 2.2628 |
| 2.3953 | 5.42 | 1288000 | 2.2605 |
| 2.3973 | 5.46 | 1296000 | 2.2667 |
| 2.3973 | 5.49 | 1304000 | 2.2640 |
| 2.4027 | 5.52 | 1312000 | 2.2619 |
| 2.4027 | 5.56 | 1320000 | 2.2687 |
| 2.3967 | 5.59 | 1328000 | 2.2598 |
| 2.3967 | 5.63 | 1336000 | 2.2621 |
| 2.4028 | 5.66 | 1344000 | 2.2602 |
| 2.4028 | 5.69 | 1352000 | 2.2713 |
| 2.4053 | 5.73 | 1360000 | 2.2623 |
| 2.4053 | 5.76 | 1368000 | 2.2697 |
| 2.3987 | 5.79 | 1376000 | 2.2638 |
| 2.3987 | 5.83 | 1384000 | 2.2601 |
| 2.3987 | 5.86 | 1392000 | 2.2642 |
| 2.3987 | 5.89 | 1400000 | 2.2656 |
| 2.401 | 5.93 | 1408000 | 2.2712 |
| 2.401 | 5.96 | 1416000 | 2.2639 |
| 2.4011 | 6.0 | 1424000 | 2.2646 |
| 2.4011 | 6.03 | 1432000 | 2.2669 |
| 2.4022 | 6.06 | 1440000 | 2.2619 |
| 2.4022 | 6.1 | 1448000 | 2.2580 |
| 2.3998 | 6.13 | 1456000 | 2.2612 |
| 2.3998 | 6.16 | 1464000 | 2.2652 |
| 2.3999 | 6.2 | 1472000 | 2.2610 |
| 2.3999 | 6.23 | 1480000 | 2.2567 |
| 2.3984 | 6.27 | 1488000 | 2.2590 |
| 2.3984 | 6.3 | 1496000 | 2.2565 |
| 2.4017 | 6.33 | 1504000 | 2.2658 |
| 2.4017 | 6.37 | 1512000 | 2.2626 |
| 2.4055 | 6.4 | 1520000 | 2.2656 |
| 2.4055 | 6.43 | 1528000 | 2.2622 |
| 2.3959 | 6.47 | 1536000 | 2.2691 |
| 2.3959 | 6.5 | 1544000 | 2.2604 |
| 2.4016 | 6.53 | 1552000 | 2.2599 |
| 2.4016 | 6.57 | 1560000 | 2.2655 |
| 2.3986 | 6.6 | 1568000 | 2.2684 |
| 2.3986 | 6.64 | 1576000 | 2.2716 |
| 2.4051 | 6.67 | 1584000 | 2.2605 |
| 2.4051 | 6.7 | 1592000 | 2.2569 |
| 2.4057 | 6.74 | 1600000 | 2.2687 |
| 2.4057 | 6.77 | 1608000 | 2.2571 |
| 2.3956 | 6.8 | 1616000 | 2.2664 |
| 2.3956 | 6.84 | 1624000 | 2.2612 |
| 2.4048 | 6.87 | 1632000 | 2.2643 |
| 2.4048 | 6.91 | 1640000 | 2.2633 |
| 2.4042 | 6.94 | 1648000 | 2.2634 |
| 2.4042 | 6.97 | 1656000 | 2.2637 |
| 2.4008 | 7.01 | 1664000 | 2.2619 |
| 2.4008 | 7.04 | 1672000 | 2.2579 |
| 2.397 | 7.07 | 1680000 | 2.2628 |
| 2.397 | 7.11 | 1688000 | 2.2593 |
| 2.4044 | 7.14 | 1696000 | 2.2593 |
| 2.4044 | 7.17 | 1704000 | 2.2613 |
| 2.3979 | 7.21 | 1712000 | 2.2685 |
| 2.3979 | 7.24 | 1720000 | 2.2683 |
| 2.4017 | 7.28 | 1728000 | 2.2611 |
| 2.4017 | 7.31 | 1736000 | 2.2672 |
| 2.4017 | 7.34 | 1744000 | 2.2577 |
| 2.4017 | 7.38 | 1752000 | 2.2609 |
| 2.4018 | 7.41 | 1760000 | 2.2567 |
| 2.4018 | 7.44 | 1768000 | 2.2661 |
| 2.3905 | 7.48 | 1776000 | 2.2671 |
| 2.3905 | 7.51 | 1784000 | 2.2663 |
| 2.4063 | 7.55 | 1792000 | 2.2619 |
| 2.4063 | 7.58 | 1800000 | 2.2587 |
| 2.4015 | 7.61 | 1808000 | 2.2584 |
| 2.4015 | 7.65 | 1816000 | 2.2580 |
| 2.3984 | 7.68 | 1824000 | 2.2586 |
| 2.3984 | 7.71 | 1832000 | 2.2620 |
| 2.3962 | 7.75 | 1840000 | 2.2584 |
| 2.3962 | 7.78 | 1848000 | 2.2607 |
| 2.3998 | 7.81 | 1856000 | 2.2638 |
| 2.3998 | 7.85 | 1864000 | 2.2629 |
| 2.4005 | 7.88 | 1872000 | 2.2716 |
| 2.4005 | 7.92 | 1880000 | 2.2623 |
| 2.4006 | 7.95 | 1888000 | 2.2555 |
| 2.4006 | 7.98 | 1896000 | 2.2653 |
| 2.3946 | 8.02 | 1904000 | 2.2629 |
| 2.3946 | 8.05 | 1912000 | 2.2654 |
| 2.3983 | 8.08 | 1920000 | 2.2623 |
| 2.3983 | 8.12 | 1928000 | 2.2544 |
| 2.4038 | 8.15 | 1936000 | 2.2605 |
| 2.4038 | 8.19 | 1944000 | 2.2622 |
| 2.399 | 8.22 | 1952000 | 2.2600 |
| 2.399 | 8.25 | 1960000 | 2.2629 |
| 2.3983 | 8.29 | 1968000 | 2.2621 |
| 2.3983 | 8.32 | 1976000 | 2.2609 |
| 2.4059 | 8.35 | 1984000 | 2.2705 |
| 2.4059 | 8.39 | 1992000 | 2.2572 |
| 2.4058 | 8.42 | 2000000 | 2.2602 |
| 2.4058 | 8.45 | 2008000 | 2.2626 |
| 2.3954 | 8.49 | 2016000 | 2.2668 |
| 2.3954 | 8.52 | 2024000 | 2.2599 |
| 2.3932 | 8.56 | 2032000 | 2.2643 |
| 2.3932 | 8.59 | 2040000 | 2.2559 |
| 2.4001 | 8.62 | 2048000 | 2.2614 |
| 2.4001 | 8.66 | 2056000 | 2.2577 |
| 2.3912 | 8.69 | 2064000 | 2.2665 |
| 2.3912 | 8.72 | 2072000 | 2.2576 |
| 2.4015 | 8.76 | 2080000 | 2.2672 |
| 2.4015 | 8.79 | 2088000 | 2.2598 |
| 2.4015 | 8.83 | 2096000 | 2.2599 |
| 2.4015 | 8.86 | 2104000 | 2.2641 |
| 2.399 | 8.89 | 2112000 | 2.2612 |
| 2.399 | 8.93 | 2120000 | 2.2607 |
| 2.3963 | 8.96 | 2128000 | 2.2633 |
| 2.3963 | 8.99 | 2136000 | 2.2567 |
| 2.3957 | 9.03 | 2144000 | 2.2630 |
| 2.3957 | 9.06 | 2152000 | 2.2597 |
| 2.3943 | 9.09 | 2160000 | 2.2624 |
| 2.3943 | 9.13 | 2168000 | 2.2599 |
| 2.4025 | 9.16 | 2176000 | 2.2578 |
| 2.4025 | 9.2 | 2184000 | 2.2640 |
| 2.3944 | 9.23 | 2192000 | 2.2562 |
| 2.3944 | 9.26 | 2200000 | 2.2660 |
| 2.3964 | 9.3 | 2208000 | 2.2556 |
| 2.3964 | 9.33 | 2216000 | 2.2697 |
| 2.4026 | 9.36 | 2224000 | 2.2652 |
| 2.4026 | 9.4 | 2232000 | 2.2571 |
| 2.398 | 9.43 | 2240000 | 2.2555 |
| 2.398 | 9.47 | 2248000 | 2.2607 |
| 2.4038 | 9.5 | 2256000 | 2.2558 |
| 2.4038 | 9.53 | 2264000 | 2.2660 |
| 2.4027 | 9.57 | 2272000 | 2.2587 |
| 2.4027 | 9.6 | 2280000 | 2.2605 |
| 2.4025 | 9.63 | 2288000 | 2.2578 |
| 2.4025 | 9.67 | 2296000 | 2.2609 |
| 2.3969 | 9.7 | 2304000 | 2.2597 |
| 2.3969 | 9.73 | 2312000 | 2.2619 |
| 2.3886 | 9.77 | 2320000 | 2.2645 |
| 2.3886 | 9.8 | 2328000 | 2.2717 |
| 2.3942 | 9.84 | 2336000 | 2.2627 |
| 2.3942 | 9.87 | 2344000 | 2.2582 |
| 2.396 | 9.9 | 2352000 | 2.2634 |
| 2.396 | 9.94 | 2360000 | 2.2582 |
| 2.3998 | 9.97 | 2368000 | 2.2643 |
| 2.3998 | 10.0 | 2376000 | 2.2690 |
| 2.4014 | 10.04 | 2384000 | 2.2655 |
| 2.4014 | 10.07 | 2392000 | 2.2660 |
| 2.4004 | 10.11 | 2400000 | 2.2650 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
11sam11/my-dog-is-xzf | 11sam11 | 2024-03-04T14:02:08Z | 0 | 1 | diffusers | ["diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-03-04T13:56:00Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### my-dog-is-xzf Dreambooth model trained by 11sam11 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: TCEP034
Sample pictures of this concept: *(images omitted)*
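No usage snippet is included in the card. A minimal sketch, with two assumptions: the repo loads with the standard StableDiffusionPipeline (per its tags), and "xzf" is the instance token implied by the model name:

```python
# Minimal sketch, not author-provided: "xzf" as the instance token is inferred
# from the model name, not documented.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "11sam11/my-dog-is-xzf", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of xzf dog playing in a park").images[0]
image.save("xzf-dog.png")
```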
|
machinelearningzuu/gemma-2b-biotech | machinelearningzuu | 2024-03-04T14:01:18Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-03-01T05:26:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
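Until the authors provide a snippet, a minimal sketch based on the repo's `gemma`, `4-bit`, and `bitsandbytes` tags; it assumes the quantization config is stored with the checkpoint so a plain `from_pretrained` picks it up:

```python
# Minimal sketch, not author-provided: load the 4-bit checkpoint and generate.
# bitsandbytes and accelerate must be installed for the quantized load.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "machinelearningzuu/gemma-2b-biotech"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What does a polymerase chain reaction do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```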
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
freshpearYoon/v3_free_all_re_4000 | freshpearYoon | 2024-03-04T13:57:11Z | 3 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-03-04T03:31:50Z |
---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-large-v3
metrics:
- wer
model-index:
- name: whisper_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_finetune
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the aihub_100000 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3754
- Cer: 6.9474
- Wer: 28.5714
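For quick reference, here is a minimal inference sketch (not part of the original card; the audio path is illustrative and the `language` generation argument is an assumption based on the Korean fine-tuning data):
```python
import torch
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="freshpearYoon/v3_free_all_re_4000",
    torch_dtype=torch.float16,
    device="cuda:0",
)

# Transcribe a Korean audio file (path is illustrative).
result = asr("sample_ko.wav", generate_kwargs={"language": "korean"})
print(result["text"])
```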
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.4274 | 0.14 | 1000 | 0.3982 | 6.9437 | 28.4443 |
| 0.3884 | 0.28 | 2000 | 0.3754 | 6.9474 | 28.5714 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Weni/ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0-merged
|
Weni
| 2024-03-04T13:53:38Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T13:40:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
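Until the authors add official instructions, the following is a hedged starting point with `transformers` (the repo id comes from this model; the availability of a chat template is an assumption based on the `conversational` tag):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weni/ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires `accelerate` to be installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct-style chat formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Classify the sentiment of: 'I loved this product!'"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```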
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KapitalK/course_eval_gpt2_large
|
KapitalK
| 2024-03-04T13:52:58Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-large",
"base_model:adapter:openai-community/gpt2-large",
"region:us"
] | null | 2024-03-04T09:40:51Z |
---
library_name: peft
base_model: openai-community/gpt2-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
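As a placeholder, here is a minimal sketch using `peft` (assumptions: this repo hosts an adapter for `openai-community/gpt2-large`, as the card metadata suggests; the prompt is illustrative):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the gpt2-large base model together with the adapter from this repo.
model = AutoPeftModelForCausalLM.from_pretrained("KapitalK/course_eval_gpt2_large")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-large")

inputs = tokenizer("This course was", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```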
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
nlpguy/AlloyIngot
|
nlpguy
| 2024-03-04T13:48:30Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"base_model:merge:Gille/StrangeMerges_21-7B-slerp",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:merge:eren23/dpo-binarized-NeutrixOmnibe-7B",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-13T13:00:43Z |
---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- eren23/dpo-binarized-NeutrixOmnibe-7B
- Gille/StrangeMerges_21-7B-slerp
model-index:
- name: AlloyIngot
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.12
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
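For intuition, SLERP interpolates along the arc between the two models' weight vectors rather than along a straight line, so intermediate points keep a comparable norm. A simplified sketch of the per-tensor operation (mergekit's actual implementation adds more edge-case handling):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at factor t."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, computed from normalized copies.
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel vectors: fall back to linear interpolation
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```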
### Models Merged
The following models were included in the merge:
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: Gille/StrangeMerges_21-7B-slerp
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.3, 0.5, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.7, 0.5, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: eren23/dpo-binarized-NeutrixOmnibe-7B
- layer_range: [0, 32]
model:
model:
path: Gille/StrangeMerges_21-7B-slerp
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngot)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.20|
|AI2 Reasoning Challenge (25-Shot)|73.98|
|HellaSwag (10-Shot) |89.05|
|MMLU (5-Shot) |64.83|
|TruthfulQA (0-shot) |75.12|
|Winogrande (5-shot) |85.08|
|GSM8k (5-shot) |69.14|
|
nlpguy/Hermes-low-tune
|
nlpguy
| 2024-03-04T13:48:26Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:openaccess-ai-collective/dpopenhermes-alpha-v0",
"base_model:finetune:openaccess-ai-collective/dpopenhermes-alpha-v0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T22:33:51Z |
---
license: apache-2.0
tags:
- mergekit
- merge
base_model:
- openaccess-ai-collective/dpopenhermes-alpha-v0
- simonveitner/Math-OpenHermes-2.5-Mistral-7B
model-index:
- name: Hermes-low-tune
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.37
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [openaccess-ai-collective/dpopenhermes-alpha-v0](https://huggingface.co/openaccess-ai-collective/dpopenhermes-alpha-v0)
* [simonveitner/Math-OpenHermes-2.5-Mistral-7B](https://huggingface.co/simonveitner/Math-OpenHermes-2.5-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: simonveitner/Math-OpenHermes-2.5-Mistral-7B
dtype: float16
merge_method: slerp
parameters:
t:
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: simonveitner/Math-OpenHermes-2.5-Mistral-7B
- layer_range: [0, 32]
model: openaccess-ai-collective/dpopenhermes-alpha-v0
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Hermes-low-tune)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.18|
|AI2 Reasoning Challenge (25-Shot)|63.99|
|HellaSwag (10-Shot) |83.75|
|MMLU (5-Shot) |63.60|
|TruthfulQA (0-shot) |51.37|
|Winogrande (5-shot) |77.90|
|GSM8k (5-shot) |62.47|
|
nlpguy/Hermes-low-tune-3
|
nlpguy
| 2024-03-04T13:48:20Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nlpguy/Hermes-low-tune-2",
"base_model:merge:nlpguy/Hermes-low-tune-2",
"base_model:openaccess-ai-collective/DPOpenHermes-7B-v2",
"base_model:merge:openaccess-ai-collective/DPOpenHermes-7B-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T16:22:47Z |
---
license: apache-2.0
tags:
- mergekit
- merge
base_model:
- nlpguy/Hermes-low-tune-2
- openaccess-ai-collective/DPOpenHermes-7B-v2
model-index:
- name: Hermes-low-tune-3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.94
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nlpguy/Hermes-low-tune-2](https://huggingface.co/nlpguy/Hermes-low-tune-2)
* [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: openaccess-ai-collective/DPOpenHermes-7B-v2
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.25
slices:
- sources:
- layer_range: [0, 32]
model: openaccess-ai-collective/DPOpenHermes-7B-v2
- layer_range: [0, 32]
model: nlpguy/Hermes-low-tune-2
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Hermes-low-tune-3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.25|
|AI2 Reasoning Challenge (25-Shot)|66.21|
|HellaSwag (10-Shot) |84.99|
|MMLU (5-Shot) |63.74|
|TruthfulQA (0-shot) |57.94|
|Winogrande (5-shot) |78.77|
|GSM8k (5-shot) |63.84|
|
nlpguy/Hermes-low-tune-2
|
nlpguy
| 2024-03-04T13:48:13Z | 59 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B-laser",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B-laser",
"base_model:openaccess-ai-collective/dpopenhermes-alpha-v0",
"base_model:merge:openaccess-ai-collective/dpopenhermes-alpha-v0",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:merge:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T11:59:53Z |
---
license: apache-2.0
tags:
- mergekit
- merge
base_model:
- simonveitner/Math-OpenHermes-2.5-Mistral-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B-laser
- openaccess-ai-collective/dpopenhermes-alpha-v0
- teknium/OpenHermes-2.5-Mistral-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
model-index:
- name: Hermes-low-tune-2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-2
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) as a base.
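For intuition, task arithmetic adds weighted "task vectors" (fine-tuned weights minus base weights) onto the base model; a minimal per-tensor sketch (simplified relative to mergekit's implementation):
```python
import torch

def task_arithmetic(base: torch.Tensor, tuned: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """merged = base + sum_i w_i * (tuned_i - base), i.e. a weighted sum of task vectors."""
    merged = base.float().clone()
    for w, t in zip(weights, tuned):
        merged += w * (t.float() - base.float())
    return merged.to(base.dtype)
```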
### Models Merged
The following models were included in the merge:
* [simonveitner/Math-OpenHermes-2.5-Mistral-7B](https://huggingface.co/simonveitner/Math-OpenHermes-2.5-Mistral-7B)
* [mlabonne/NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)
* [openaccess-ai-collective/dpopenhermes-alpha-v0](https://huggingface.co/openaccess-ai-collective/dpopenhermes-alpha-v0)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: teknium/OpenHermes-2.5-Mistral-7B
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: teknium/OpenHermes-2.5-Mistral-7B
- layer_range: [0, 32]
model: simonveitner/Math-OpenHermes-2.5-Mistral-7B
parameters:
weight: 0.25
- layer_range: [0, 32]
model: openaccess-ai-collective/dpopenhermes-alpha-v0
parameters:
weight: 0.25
- layer_range: [0, 32]
model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
weight: 0.25
- layer_range: [0, 32]
model: mlabonne/NeuralHermes-2.5-Mistral-7B-laser
parameters:
weight: 0.25
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Hermes-low-tune-2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.04|
|AI2 Reasoning Challenge (25-Shot)|65.61|
|HellaSwag (10-Shot) |84.47|
|MMLU (5-Shot) |63.69|
|TruthfulQA (0-shot) |53.18|
|Winogrande (5-shot) |77.74|
|GSM8k (5-shot) |63.53|
|
nlpguy/ColorShadow-7B
|
nlpguy
| 2024-03-04T13:48:06Z | 1,553 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T13:13:34Z |
---
license: apache-2.0
tags:
- merge
model-index:
- name: ColorShadow-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.56
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B
name: Open LLM Leaderboard
---
# ColorShadow-7B
This is a Gradient-SLERP merge between diffnamehard/Mistral-CatMacaroni-slerp-7B and cookinai/Valkyrie-V1 performed using mergekit.
Here is the config file used:
```
slices:
- sources:
- model: diffnamehard/Mistral-CatMacaroni-slerp-7B
layer_range: [0, 32]
- model: cookinai/Valkyrie-V1
layer_range: [0, 32]
merge_method: slerp
base_model: diffnamehard/Mistral-CatMacaroni-slerp-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.34|
|AI2 Reasoning Challenge (25-Shot)|67.83|
|HellaSwag (10-Shot) |85.15|
|MMLU (5-Shot) |61.69|
|TruthfulQA (0-shot) |59.56|
|Winogrande (5-shot) |80.58|
|GSM8k (5-shot) |55.19|
|
nlpguy/AlloyIngotNeoX
|
nlpguy
| 2024-03-04T13:47:34Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gille/StrangeMerges_20-7B-slerp",
"base_model:merge:Gille/StrangeMerges_20-7B-slerp",
"base_model:bardsai/jaskier-7b-dpo-v4.3",
"base_model:merge:bardsai/jaskier-7b-dpo-v4.3",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-15T09:33:54Z |
---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- bardsai/jaskier-7b-dpo-v4.3
- Gille/StrangeMerges_20-7B-slerp
model-index:
- name: AlloyIngotNeoX
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 74.32
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [bardsai/jaskier-7b-dpo-v4.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v4.3)
* [Gille/StrangeMerges_20-7B-slerp](https://huggingface.co/Gille/StrangeMerges_20-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: Gille/StrangeMerges_20-7B-slerp
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.1, 0.3, 0.5, 0.7, 0.9]
- filter: mlp
value: [0.9, 0.7, 0.5, 0.3, 0.1]
- value: 0.45
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: Gille/StrangeMerges_20-7B-slerp
- layer_range: [0, 32]
model:
model:
path: bardsai/jaskier-7b-dpo-v4.3
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngotNeoX)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.21|
|AI2 Reasoning Challenge (25-Shot)|74.32|
|HellaSwag (10-Shot) |89.07|
|MMLU (5-Shot) |64.97|
|TruthfulQA (0-shot) |74.57|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |69.83|
|
nlpguy/ColorShadow-7B-v3
|
nlpguy
| 2024-03-04T13:47:22Z | 1,556 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T20:01:26Z |
---
license: apache-2.0
tags:
- merge
model-index:
- name: ColorShadow-7B-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.88
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
---
# ColorShadow-7B-v3
This is a Gradient-SLERP merge between ColorShadow-7B and Terminis-7B performed using mergekit.
Here is the config file used:
```
slices:
- sources:
- model: nlpguy/ColorShadow-7B
layer_range: [0, 32]
- model: Q-bert/Terminis-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nlpguy/ColorShadow-7B
parameters:
t:
- filter: self_attn
value: [1, 0.5, 0.7, 0.3, 0]
- filter: mlp
value: [0, 0.5, 0.3, 0.7, 1]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.29|
|AI2 Reasoning Challenge (25-Shot)|67.58|
|HellaSwag (10-Shot) |85.04|
|MMLU (5-Shot) |60.57|
|TruthfulQA (0-shot) |62.88|
|Winogrande (5-shot) |80.11|
|GSM8k (5-shot) |47.54|
|
ChuckMcSneed/SMaxxxer-v1-70b
|
ChuckMcSneed
| 2024-03-04T13:42:38Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T12:12:12Z |
---
license: llama2
tags:
- merge
- mergekit
model-index:
- name: SMaxxxer-v1-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b
name: Open LLM Leaderboard
---
# BABE WAKE UP NEW MEME MODELS JUST DROPPED
Ladies and Gentlemen!
I present to you
*drum roll*
THE BENCHBREAKERS!
- [PMaxxxer](https://huggingface.co/ChuckMcSneed/PMaxxxer-v1-70b) (The Good)
- [SMaxxxer](https://huggingface.co/ChuckMcSneed/SMaxxxer-v1-70b) (The Bad)
- [BenchmaxxxerPS](https://huggingface.co/ChuckMcSneed/BenchmaxxxerPS-v1-123b) (The Ugly)
These three **interesting** models were designed in an attempt to break [my own meme benchmark](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) and, well... they failed. The results are interesting nonetheless.
# SMAXXXER
The aggressor, the angry and dumb hobo that will roleplay with you. This meme model was designed to break the stylized writing test, and it kinda did; it still can't surpass ChatGPT, though.
For its creation [lzlv](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) was TIES-merged with [spicyboros](https://huggingface.co/jondurbin/spicyboros-70b-2.2), [xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b) using [mergekit](https://github.com/cg123/mergekit).
# PMAXXXER
The overly politically correct SJW university dropout, the failed writer that's not really good at anything. This meme model was designed to break the poems test and it's an absolute failure.
For its creation [WinterGoddess](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2) was TIES-merged with [euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b) using [mergekit](https://github.com/cg123/mergekit).
# BENCHMAXXXER PS
The true meme model. Goliath-style frankenmerge of SMAXXXER and PMAXXXER. You might think: "Oh it's a frankenmerge, the characteristics of the models will even out, right?" This is completely wrong in this case: here the characteristics of the models add up. You get an angry hobo stuck with an SJW in the same fucking body! It will assault you and then immediately apologize for it! Then it will assault you again! And apologize again! Kinda funny. It also has a somewhat different writing style compared to Goliath.
Is it worth using over Goliath? Not really. However, if you have fast internet and patience to try a 123b meme model, go for it!
# FAILED MODELS (not gonna upload)
## BENCHMAXXXER SP
Frankenmerge of SMAXXXER and PMAXXXER, just like BENCHMAXXXER PS, but in different order. Has severe brain damage, clearly the influence of the hobo is strong in this one.
## BENCHMAXXXER SS
Self-merge of SMAXXXER, a bit less dumb and a bit less aggressive than the original SMAXXXER.
## BENCHMAXXXER MOE
2x70B MOE merge of SMAXXXER and PMAXXXER, unremarkable. Not smart, not angry. Just averaged out.
# PROMPT FORMAT
Alpaca.
```
### Instruction:
{instruction}
### Input:
{input}
### Response:
```
# Benchmarks
## NeoEvalPlusN
[My meme benchmark](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) which the models were designed to break.
| Test name | goliath-120b |PMaxxxer-v1-70b |SMaxxxer-v1-70b |BenchmaxxxerPS-v1-123b |BenchmaxxxerSP-v1-123b |BenchmaxxxerSS-v1-123b |BenchmaxxxerMOE-v1-123b |
| -------- | ------- | -------- | ------- | -------- | ------- | ------- | -------- |
| B | 3 | 3 |2 |3 |1.5 |1.5|2|
| C | 2 | 1 |1 |2 |2 |2|1|
| D | 1 | 1 |0 |1 |1 |0.5|3|
| S | 5 | 6.75 |7.25 |7.25 |6.75 |6.5|7.25|
| P | 6 | 4.75 |4.25 |5.25 |5.25 |5.5|5|
| Total | 17 | 16.5 |14.5 |18.5 |16.5 |16|18.25|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
[Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|---------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|PMaxxxer-v1-70b |72.41 |71.08|87.88 |70.39|59.77 |82.64 |62.7 |
|SMaxxxer-v1-70b |72.23 |70.65|88.02 |70.55|60.7 |82.87 |60.58|
|Difference |0.18 |0.43 |-0.14 |-0.16|-0.93 |-0.23 |2.12 |
Performance here is decent. It was #5 on the leaderboard among 70b models when I submitted it. This leaderboard is currently quite useless, though; some 7b braindead meme merges have high scores there, claiming to be the next GPT4. At least I don't pretend that my models aren't a meme.
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__SMaxxxer-v1-70b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.23|
|AI2 Reasoning Challenge (25-Shot)|70.65|
|HellaSwag (10-Shot) |88.02|
|MMLU (5-Shot) |70.55|
|TruthfulQA (0-shot) |60.70|
|Winogrande (5-shot) |82.87|
|GSM8k (5-shot) |60.58|
|
Instinct-AI/InstinctiveDiffuse
|
Instinct-AI
| 2024-03-04T13:39:17Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-03-04T11:36:27Z |
---
library_name: diffusers
---
# InstinctiveDiffuse
InstinctiveDiffuse is fine-tuned from the `StableDiffusionXL` base model (`stabilityai/stable-diffusion-xl-base-1.0`), carefully trained on 100,000 copyright-free images and then DPO-tuned.
The model generates sharper images when the prompt includes the ~sharp~ tag.
## Usage
Usage instructions for the model are available via the [InstinctiveDiffuse GitHub repository](https://github.com/erfanzar/InstinctiveDiffuse).
Please also consider using these keywords to improve your prompts: best quality, ocean render, HD, --version 8.
## Direct Use
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

base = "Instinct-AI/InstinctiveDiffuse"

# Load the SDXL pipeline in half precision.
pipe = StableDiffusionXLPipeline.from_pretrained(base, torch_dtype=torch.float16)
# Use an Euler scheduler with trailing timestep spacing, which suits few-step sampling.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
# Offload idle submodules to CPU to reduce VRAM usage (requires `accelerate`).
pipe.enable_model_cpu_offload()

# Generate a single image with 8 inference steps and minimal guidance.
pipe(
    "An astronaut riding a yellow horse on mars",
    num_inference_steps=8,
    guidance_scale=1,
).images[0]
```
|
jan-hq/Mistral-7B-Instruct-v0.2-DARE
|
jan-hq
| 2024-03-04T13:36:21Z | 15 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-12T09:12:10Z |
---
language:
- en
license: apache-2.0
model-index:
- name: Mistral-7B-Instruct-v0.2-DARE
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 75.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.36
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 18.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE
name: Open LLM Leaderboard
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model uses the `DARE` method to merge [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) with 3 of the leading models on the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) as of 12th Dec:
1. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
2. [MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
3. [v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B)
- base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
The yaml config file for this model is here:
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
merge_method: dare_ties
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
parameters:
density: 0.8
weight: 0.4
- model: Q-bert/MetaMath-Cybertron-Starling
parameters:
density: 0.8
weight: 0.3
- model: v1olet/v1olet_marcoroni-go-bruins-merge-7B
parameters:
density: 0.8
weight: 0.3
parameters:
int8_mask: true
```
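For intuition, DARE keeps a random `density` fraction of each fine-tune's parameter delta and rescales the survivors so the expected delta is unchanged; a simplified sketch of that core step (the `dare_ties` method used above additionally applies TIES-style sign resolution before summing the deltas onto the base, which is omitted here):
```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, density: float = 0.8) -> torch.Tensor:
    """Drop-And-REscale: randomly sparsify a fine-tuning delta, then rescale the survivors."""
    delta = tuned - base
    keep = torch.bernoulli(torch.full_like(delta, density))  # keep each entry w.p. `density`
    return keep * delta / density  # 1/density rescaling preserves the delta's expected value
```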
# Prompt template:
- **ChatML**
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- **Alpaca**
```
{system_message}
### Instruction:
{prompt}
### Response:
```
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open source, ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [SLERP](https://github.com/Digitous/LLM-SLERP-Merge)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_janhq__Mistral-7B-Instruct-v0.2-DARE)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.84|
|AI2 Reasoning Challenge (25-Shot)|61.95|
|HellaSwag (10-Shot) |75.62|
|MMLU (5-Shot) |49.99|
|TruthfulQA (0-shot) |54.36|
|Winogrande (5-shot) |74.98|
|GSM8k (5-shot) |18.12|
|
jan-hq/stealth-v1.3
|
jan-hq
| 2024-03-04T13:36:19Z | 1,353 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T01:46:19Z |
---
language:
- en
license: apache-2.0
model-index:
- name: stealth-v1.3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.71
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__stealth-v1.3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.12|
|AI2 Reasoning Challenge (25-Shot)|67.49|
|HellaSwag (10-Shot) |86.74|
|MMLU (5-Shot) |64.45|
|TruthfulQA (0-shot) |55.71|
|Winogrande (5-shot) |80.74|
|GSM8k (5-shot) |71.57|
|
jan-hq/LlamaCorn-1.1B
|
jan-hq
| 2024-03-04T13:36:18Z | 29 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:jan-hq/bagel_sft_binarized",
"dataset:jan-hq/dolphin_binarized",
"dataset:jan-hq/openhermes_binarized",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T10:54:51Z |
---
license: apache-2.0
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- jan-hq/bagel_sft_binarized
- jan-hq/dolphin_binarized
- jan-hq/openhermes_binarized
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model-index:
- name: LlamaCorn-sft-adapter
results: []
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
# LlamaCorn-sft-adapter
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the jan-hq/bagel_sft_binarized, the jan-hq/dolphin_binarized and the jan-hq/openhermes_binarized datasets.
It achieves the following results on the evaluation set:
- Loss: 0.9638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
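For illustration, a sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is hypothetical; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# Effective train batch size = 8 per device x 2 GPUs x 4 accumulation steps = 64.
training_args = TrainingArguments(
    output_dir="llamacorn-sft",       # hypothetical path
    learning_rate=7e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```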
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.038 | 1.0 | 6606 | 1.0506 |
| 0.876 | 2.0 | 13212 | 0.9648 |
| 0.7713 | 3.0 | 19818 | 0.9638 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__LlamaCorn-1.1B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.94|
|AI2 Reasoning Challenge (25-Shot)|34.13|
|HellaSwag (10-Shot) |59.33|
|MMLU (5-Shot) |29.01|
|TruthfulQA (0-shot) |36.78|
|Winogrande (5-shot) |61.96|
|GSM8k (5-shot) | 0.45|
|
jan-hq/stealth-v2
|
jan-hq
| 2024-03-04T13:36:12Z | 117 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-24T10:27:55Z |
---
language:
- en
license: apache-2.0
model-index:
- name: stealth-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.47
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 88.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Training details
Training details are documented in the [stealth-finance-v1-adapter card](https://huggingface.co/jan-hq/stealth-finance-v1-adapter).
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__stealth-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.37|
|AI2 Reasoning Challenge (25-Shot)|73.89|
|HellaSwag (10-Shot) |89.26|
|MMLU (5-Shot) |64.94|
|TruthfulQA (0-shot) |72.47|
|Winogrande (5-shot) |88.00|
|GSM8k (5-shot) |69.67|
|
MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
|
MaziyarPanahi
| 2024-03-04T13:36:12Z | 61 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-12T21:31:58Z |
---
license: apache-2.0
library_name: transformers
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: TheTop-5x7B-Instruct-S2-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
name: Open LLM Leaderboard
---
# MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
A merge of top 7B models using the SLERP method.
> mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.
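For context, a SLERP merge in mergekit is driven by a small YAML config. The sketch below is illustrative only; the model names and interpolation factor are assumptions, not the actual recipe used for this merge:
```yaml
# Illustrative mergekit SLERP config -- not the actual recipe for this model.
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t: 0.5  # interpolation factor between the two models
dtype: bfloat16
```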
## Eval
```python
{
"all": {
"acc": 0.6545868511485138,
"acc_stderr": 0.031980293841566164,
"acc_norm": 0.6542757501692061,
"acc_norm_stderr": 0.03263807517879597,
"mc1": 0.45165238678090575,
"mc1_stderr": 0.017421480300277643,
"mc2": 0.6217500644350165,
"mc2_stderr": 0.015583825644663436
},
"harness|arc:challenge|25": {
"acc": 0.6723549488054608,
"acc_stderr": 0.01371584794071934,
"acc_norm": 0.6945392491467577,
"acc_norm_stderr": 0.01346008047800251
},
"harness|hellaswag|10": {
"acc": 0.7046405098585939,
"acc_stderr": 0.0045527183605131,
"acc_norm": 0.871539533957379,
"acc_norm_stderr": 0.0033391798350182853
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754407,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754407
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083522,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083522
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657266,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657266
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.02897264888484427,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.02897264888484427
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135363,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135363
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137296,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137296
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165612,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165612
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8352490421455939,
"acc_stderr": 0.013265346261323788,
"acc_norm": 0.8352490421455939,
"acc_norm_stderr": 0.013265346261323788
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7543352601156069,
"acc_stderr": 0.023176298203992005,
"acc_norm": 0.7543352601156069,
"acc_norm_stderr": 0.023176298203992005
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4547486033519553,
"acc_stderr": 0.016653875777524006,
"acc_norm": 0.4547486033519553,
"acc_norm_stderr": 0.016653875777524006
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7483660130718954,
"acc_stderr": 0.0248480182638752,
"acc_norm": 0.7483660130718954,
"acc_norm_stderr": 0.0248480182638752
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.02378858355165854,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.02378858355165854
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4745762711864407,
"acc_stderr": 0.012753716929101008,
"acc_norm": 0.4745762711864407,
"acc_norm_stderr": 0.012753716929101008
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7095588235294118,
"acc_stderr": 0.027576468622740536,
"acc_norm": 0.7095588235294118,
"acc_norm_stderr": 0.027576468622740536
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6928104575163399,
"acc_stderr": 0.01866335967146367,
"acc_norm": 0.6928104575163399,
"acc_norm_stderr": 0.01866335967146367
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578337,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578337
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.45165238678090575,
"mc1_stderr": 0.017421480300277643,
"mc2": 0.6217500644350165,
"mc2_stderr": 0.015583825644663436
},
"harness|winogrande|5": {
"acc": 0.7963693764798737,
"acc_stderr": 0.011317798781626913
},
"harness|gsm8k|5": {
"acc": 0.7202426080363912,
"acc_stderr": 0.01236438401673532
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S2-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.57|
|AI2 Reasoning Challenge (25-Shot)|69.45|
|HellaSwag (10-Shot) |87.15|
|MMLU (5-Shot) |64.98|
|TruthfulQA (0-shot) |62.18|
|Winogrande (5-shot) |79.64|
|GSM8k (5-shot) |72.02|
|
MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
|
MaziyarPanahi
| 2024-03-04T13:35:31Z | 29 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space",
"conversational",
"base_model:one-man-army/UNA-34Beagles-32K-bf16-v1",
"base_model:finetune:one-man-army/UNA-34Beagles-32K-bf16-v1",
"license:apache-2.0",
"model-index"
] |
text-generation
| 2024-01-18T10:56:48Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- llama
- text-generation
- dataset:ai2_arc
- dataset:unalignment/spicy-3.1
- dataset:codeparrot/apps
- dataset:facebook/belebele
- dataset:boolq
- dataset:jondurbin/cinematika-v0.1
- dataset:drop
- dataset:lmsys/lmsys-chat-1m
- dataset:TIGER-Lab/MathInstruct
- dataset:cais/mmlu
- dataset:Muennighoff/natural-instructions
- dataset:openbookqa
- dataset:piqa
- dataset:Vezora/Tested-22k-Python-Alpaca
- dataset:cakiki/rosetta-code
- dataset:Open-Orca/SlimOrca
- dataset:spider
- dataset:squad_v2
- dataset:migtissera/Synthia-v1.3
- dataset:datasets/winogrande
- dataset:nvidia/HelpSteer
- dataset:Intel/orca_dpo_pairs
- dataset:unalignment/toxic-dpo-v0.1
- dataset:jondurbin/truthy-dpo-v0.1
- dataset:allenai/ultrafeedback_binarized_cleaned
- dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned
- dataset:LDJnr/Capybara
- dataset:JULIELab/EmoBank
- dataset:kingbri/PIPPA-shareGPT
- license:other
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- has_space
model_name: UNA-34Beagles-32K-bf16-v1-GPTQ
base_model: one-man-army/UNA-34Beagles-32K-bf16-v1
inference: false
model_creator: one-man-army
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
model-index:
- name: UNA-34Beagles-32K-bf16-v1-GPTQ
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 26.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 26.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
name: Open LLM Leaderboard
---
# Description
[MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ](https://huggingface.co/MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ) is a quantized (GPTQ) version of [one-man-army/UNA-34Beagles-32K-bf16-v1](https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1).
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ"

# Quantization settings matching how the model was quantized (4-bit, group size 128)
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False
)

# Load the pre-quantized weights onto the first GPU
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Standard text-generation pipeline on top of the quantized model
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__UNA-34Beagles-32K-bf16-v1-GPTQ)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.15|
|AI2 Reasoning Challenge (25-Shot)|26.11|
|HellaSwag (10-Shot) |26.29|
|MMLU (5-Shot) |24.43|
|TruthfulQA (0-shot) |47.27|
|Winogrande (5-shot) |50.83|
|GSM8k (5-shot) | 0.00|
|
MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1
|
MaziyarPanahi
| 2024-03-04T13:35:17Z | 9 | 5 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"alpaca",
"dataset:tatsu-lab/alpaca",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T21:16:13Z |
---
license: apache-2.0
library_name: transformers
tags:
- mistral
- alpaca
datasets:
- tatsu-lab/alpaca
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Mistral-7B-Alpaca-52k-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1
name: Open LLM Leaderboard
---
# Description
The `mistralai/Mistral-7B-v0.1` model fine-tuned on the 52k-sample Alpaca dataset (`tatsu-lab/alpaca`).
# How to use it
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline

model_id = "MaziyarPanahi/Mistral-7B-Alpaca-52k-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer)

# Load the model in fp16, placed automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=1024,
    temperature=0.1,
    do_sample=True,
    top_p=0.95,
    repetition_penalty=1.15,
    return_full_text=False,  # return only the completion, not the prompt
    streamer=streamer
)

# Alpaca-style prompt, matching the format used during fine-tuning
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
describe about pros and cons of docker system. Answer in bullet point
### Response:
"""
res = pipe(prompt)[0]['generated_text']
```
Results:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
describe about pros and cons of docker system. Answer in bullet point
### Response:
Pros of Docker System:
- Improved portability - Docker containers can be easily moved between different environments, making it easier to deploy applications across multiple platforms.
- Increased security - Containers are isolated from each other, which helps prevent malicious code from spreading throughout the system.
- Better resource utilization - Containers allow for better resource management by allowing users to run multiple applications on a single host without having to worry about conflicts or performance issues.
Cons of Docker System:
- Learning curve - It takes time to learn how to use Docker effectively, as there are many commands and concepts involved.
- Limited customization options - While Docker provides some basic configuration options, more advanced features such as network routing require additional tools.
- Performance overhead - Running multiple containers on a single host may result in slower performance due to increased memory usage.</s>
```
## Eval
```python
{
"all": {
"acc": 0.6309850839451187,
"acc_stderr": 0.032333688535613636,
"acc_norm": 0.6368691004374645,
"acc_norm_stderr": 0.03298401757997533,
"mc1": 0.29008567931456547,
"mc1_stderr": 0.01588623687420952,
"mc2": 0.41501661742948026,
"mc2_stderr": 0.014285902986671931
},
"harness|arc:challenge|25": {
"acc": 0.5750853242320819,
"acc_stderr": 0.014445698968520767,
"acc_norm": 0.6092150170648464,
"acc_norm_stderr": 0.01425856388051378
},
"harness|hellaswag|10": {
"acc": 0.6221868153754232,
"acc_stderr": 0.0048384969668239025,
"acc_norm": 0.8212507468631747,
"acc_norm_stderr": 0.0038235918141330347
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6447368421052632,
"acc_stderr": 0.038947344870133176,
"acc_norm": 0.6447368421052632,
"acc_norm_stderr": 0.038947344870133176
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.02872750295788027,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.02872750295788027
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.048786087144669955,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.048786087144669955
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.047028804320496165,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.047028804320496165
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3915343915343915,
"acc_stderr": 0.025138091388851116,
"acc_norm": 0.3915343915343915,
"acc_norm_stderr": 0.025138091388851116
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04444444444444449,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04444444444444449
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.02489246917246283,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.02489246917246283
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526066,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526066
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267042,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267042
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.025033870583015184,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.025033870583015184
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6358974358974359,
"acc_stderr": 0.024396672985094764,
"acc_norm": 0.6358974358974359,
"acc_norm_stderr": 0.024396672985094764
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.362962962962963,
"acc_stderr": 0.029318203645206865,
"acc_norm": 0.362962962962963,
"acc_norm_stderr": 0.029318203645206865
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6218487394957983,
"acc_stderr": 0.03149930577784906,
"acc_norm": 0.6218487394957983,
"acc_norm_stderr": 0.03149930577784906
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8146788990825689,
"acc_stderr": 0.016659279700295838,
"acc_norm": 0.8146788990825689,
"acc_norm_stderr": 0.016659279700295838
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7892156862745098,
"acc_stderr": 0.028626547912437406,
"acc_norm": 0.7892156862745098,
"acc_norm_stderr": 0.028626547912437406
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7552742616033755,
"acc_stderr": 0.027985699387036423,
"acc_norm": 0.7552742616033755,
"acc_norm_stderr": 0.027985699387036423
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6636771300448431,
"acc_stderr": 0.031708824268455,
"acc_norm": 0.6636771300448431,
"acc_norm_stderr": 0.031708824268455
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243838,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243838
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.803680981595092,
"acc_stderr": 0.031207970394709218,
"acc_norm": 0.803680981595092,
"acc_norm_stderr": 0.031207970394709218
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077816,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077816
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8173690932311622,
"acc_stderr": 0.013816335389973136,
"acc_norm": 0.8173690932311622,
"acc_norm_stderr": 0.013816335389973136
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388677,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388677
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2335195530726257,
"acc_stderr": 0.014149575348976269,
"acc_norm": 0.2335195530726257,
"acc_norm_stderr": 0.014149575348976269
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.024954184324879905,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.024954184324879905
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7191358024691358,
"acc_stderr": 0.025006469755799215,
"acc_norm": 0.7191358024691358,
"acc_norm_stderr": 0.025006469755799215
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4716312056737589,
"acc_stderr": 0.029779450957303062,
"acc_norm": 0.4716312056737589,
"acc_norm_stderr": 0.029779450957303062
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4498044328552803,
"acc_stderr": 0.012705721498565107,
"acc_norm": 0.4498044328552803,
"acc_norm_stderr": 0.012705721498565107
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6580882352941176,
"acc_stderr": 0.02881472242225418,
"acc_norm": 0.6580882352941176,
"acc_norm_stderr": 0.02881472242225418
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6519607843137255,
"acc_stderr": 0.019270998708223974,
"acc_norm": 0.6519607843137255,
"acc_norm_stderr": 0.019270998708223974
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274645,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274645
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8557213930348259,
"acc_stderr": 0.02484575321230604,
"acc_norm": 0.8557213930348259,
"acc_norm_stderr": 0.02484575321230604
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197771,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197771
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160896,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160896
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29008567931456547,
"mc1_stderr": 0.01588623687420952,
"mc2": 0.41501661742948026,
"mc2_stderr": 0.014285902986671931
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698332
},
"harness|gsm8k|5": {
"acc": 0.37452615617892343,
"acc_stderr": 0.013331774158491393
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Mistral-7B-Alpaca-52k-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.46|
|AI2 Reasoning Challenge (25-Shot)|60.92|
|HellaSwag (10-Shot) |82.13|
|MMLU (5-Shot) |63.41|
|TruthfulQA (0-shot) |41.50|
|Winogrande (5-shot) |77.35|
|GSM8k (5-shot) |37.45|
|
MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
|
MaziyarPanahi
| 2024-03-04T13:34:57Z | 65 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-12T22:11:13Z |
---
license: apache-2.0
library_name: transformers
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: TheTop-5x7B-Instruct-S5-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 86.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
name: Open LLM Leaderboard
---
A merge of top 7B models combined with a SLERP merge of other 7B models.
> mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.
## Eval

```python
{
"all": {
"acc": 0.6564118716978186,
"acc_stderr": 0.03200912848183244,
"acc_norm": 0.6553902167958241,
"acc_norm_stderr": 0.03268788255929441,
"mc1": 0.5312117503059975,
"mc1_stderr": 0.01746936487457752,
"mc2": 0.6758096547963126,
"mc2_stderr": 0.015381620483561457
},
"harness|arc:challenge|25": {
"acc": 0.6919795221843004,
"acc_stderr": 0.013491429517292038,
"acc_norm": 0.7252559726962458,
"acc_norm_stderr": 0.013044617212771227
},
"harness|hellaswag|10": {
"acc": 0.7234614618601872,
"acc_stderr": 0.004463721071319078,
"acc_norm": 0.8870742879904402,
"acc_norm_stderr": 0.0031585512705264054
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542126,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542126
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5957446808510638,
"acc_stderr": 0.03208115750788684,
"acc_norm": 0.5957446808510638,
"acc_norm_stderr": 0.03208115750788684
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370332,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370332
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4312169312169312,
"acc_stderr": 0.025506481698138208,
"acc_norm": 0.4312169312169312,
"acc_norm_stderr": 0.025506481698138208
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083525,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.020986854593289733,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.020986854593289733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.02925290592725197,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.02925290592725197
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.03038835355188679,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.03038835355188679
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8385321100917431,
"acc_stderr": 0.015776239256163224,
"acc_norm": 0.8385321100917431,
"acc_norm_stderr": 0.015776239256163224
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250447,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250447
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944856,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944856
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.816793893129771,
"acc_stderr": 0.03392770926494733,
"acc_norm": 0.816793893129771,
"acc_norm_stderr": 0.03392770926494733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281365,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281365
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903341,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903341
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.02344582627654554,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.02344582627654554
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.45251396648044695,
"acc_stderr": 0.016646914804438778,
"acc_norm": 0.45251396648044695,
"acc_norm_stderr": 0.016646914804438778
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.02555316999182652,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.02555316999182652
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7561728395061729,
"acc_stderr": 0.02389187954195961,
"acc_norm": 0.7561728395061729,
"acc_norm_stderr": 0.02389187954195961
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.029752389657427047,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.029752389657427047
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47327249022164275,
"acc_stderr": 0.01275197796767601,
"acc_norm": 0.47327249022164275,
"acc_norm_stderr": 0.01275197796767601
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.02824568739146292,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.02824568739146292
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.018999707383162673,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.018999707383162673
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5312117503059975,
"mc1_stderr": 0.01746936487457752,
"mc2": 0.6758096547963126,
"mc2_stderr": 0.015381620483561457
},
"harness|winogrande|5": {
"acc": 0.861878453038674,
"acc_stderr": 0.00969698839367458
},
"harness|gsm8k|5": {
"acc": 0.7081122062168309,
"acc_stderr": 0.012522795894420867
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S5-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.14|
|AI2 Reasoning Challenge (25-Shot)|72.53|
|HellaSwag (10-Shot) |88.71|
|MMLU (5-Shot) |65.01|
|TruthfulQA (0-shot) |67.58|
|Winogrande (5-shot) |86.19|
|GSM8k (5-shot) |70.81|
|
MaziyarPanahi/TheTop-5x7B-Instruct-D-v0.1
|
MaziyarPanahi
| 2024-03-04T13:34:39Z | 61 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-12T18:48:40Z |
---
license: apache-2.0
library_name: transformers
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: TheTop-5x7B-Instruct-D-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-D-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-D-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.86
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-D-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.32
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-D-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-D-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-D-v0.1
name: Open LLM Leaderboard
---
A merge of top 7B models using the DARE method.
> mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.
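For orientation, a mergekit configuration for a DARE merge of this kind could look like the minimal sketch below. The constituent model names, densities, and weights are placeholders for illustration, not the actual recipe behind this model:
```yaml
# Hypothetical DARE-TIES merge config; model names and parameters are placeholders
models:
  - model: mistralai/Mistral-7B-v0.1   # base model: no parameters necessary
  - model: org/top-7b-model-a          # placeholder
    parameters:
      density: 0.5   # fraction of delta weights kept after DARE's random pruning
      weight: 0.5    # contribution of this model's deltas to the merge
  - model: org/top-7b-model-b          # placeholder
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```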
## Eval
```python
{
"all": {
"acc": 0.6547370286177235,
"acc_stderr": 0.03204709242170183,
"acc_norm": 0.6537337854798912,
"acc_norm_stderr": 0.03272317883588649,
"mc1": 0.5189718482252142,
"mc1_stderr": 0.01749089640576236,
"mc2": 0.6631825155680797,
"mc2_stderr": 0.01527641053841743
},
"harness|arc:challenge|25": {
"acc": 0.6953924914675768,
"acc_stderr": 0.013449522109932485,
"acc_norm": 0.7175767918088737,
"acc_norm_stderr": 0.013155456884097225
},
"harness|hellaswag|10": {
"acc": 0.7120095598486357,
"acc_stderr": 0.004519011688417168,
"acc_norm": 0.8820952001593309,
"acc_norm_stderr": 0.003218362717491129
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569525,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736411,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736411
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.02548718714785938,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.02548718714785938
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083525,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.03192271569548301,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.03192271569548301
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402538,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402538
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.029116617606083008,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.029116617606083008
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590167,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590167
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.02675082699467618,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.02675082699467618
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281376,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281376
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8275862068965517,
"acc_stderr": 0.013507943909371798,
"acc_norm": 0.8275862068965517,
"acc_norm_stderr": 0.013507943909371798
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7427745664739884,
"acc_stderr": 0.02353292543104429,
"acc_norm": 0.7427745664739884,
"acc_norm_stderr": 0.02353292543104429
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4312849162011173,
"acc_stderr": 0.016563829399047707,
"acc_norm": 0.4312849162011173,
"acc_norm_stderr": 0.016563829399047707
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998481,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998481
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873866,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873866
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4726205997392438,
"acc_stderr": 0.012751075788015055,
"acc_norm": 0.4726205997392438,
"acc_norm_stderr": 0.012751075788015055
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.018950886770806315,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.018950886770806315
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5189718482252142,
"mc1_stderr": 0.01749089640576236,
"mc2": 0.6631825155680797,
"mc2_stderr": 0.01527641053841743
},
"harness|winogrande|5": {
"acc": 0.8437253354380426,
"acc_stderr": 0.01020535179187352
},
"harness|gsm8k|5": {
"acc": 0.7172100075815011,
"acc_stderr": 0.012405020417873619
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-D-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.54|
|AI2 Reasoning Challenge (25-Shot)|71.76|
|HellaSwag (10-Shot) |88.21|
|MMLU (5-Shot) |64.86|
|TruthfulQA (0-shot) |66.32|
|Winogrande (5-shot) |84.37|
|GSM8k (5-shot) |71.72|
|
MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
|
MaziyarPanahi
| 2024-03-04T13:33:57Z | 60 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-12T20:01:44Z |
---
license: apache-2.0
library_name: transformers
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: TheTop-5x7B-Instruct-T-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 70.78
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
---
A merge of top 7B models using the TIES method.
> mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.
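TIES differs from DARE mainly in how each model's parameter deltas are sparsified before merging: magnitude-based trimming plus sign election rather than random drop-and-rescale. A hypothetical mergekit config, again with placeholder model names rather than the actual recipe:
```yaml
# Hypothetical TIES merge config; model names and parameters are placeholders
models:
  - model: mistralai/Mistral-7B-v0.1   # base model
  - model: org/top-7b-model-a          # placeholder
    parameters:
      density: 0.5   # keep the top 50% of deltas by magnitude
      weight: 0.5
  - model: org/top-7b-model-b          # placeholder
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true   # rescale the summed deltas
dtype: bfloat16
```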
## Eval

```python
{
"all": {
"acc": 0.6487801278765712,
"acc_stderr": 0.03219011246717665,
"acc_norm": 0.6479445077777353,
"acc_norm_stderr": 0.032868022907407396,
"mc1": 0.5862913096695227,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.7078078883926877,
"mc2_stderr": 0.015097515102384168
},
"harness|arc:challenge|25": {
"acc": 0.7167235494880546,
"acc_stderr": 0.013167478735134575,
"acc_norm": 0.7363481228668942,
"acc_norm_stderr": 0.012875929151297044
},
"harness|hellaswag|10": {
"acc": 0.7321250746863175,
"acc_stderr": 0.004419469983939178,
"acc_norm": 0.8884684325831508,
"acc_norm_stderr": 0.0031414591751392717
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754407,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754407
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.036146654241808254,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.036146654241808254
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.548936170212766,
"acc_stderr": 0.032529096196131965,
"acc_norm": 0.548936170212766,
"acc_norm_stderr": 0.032529096196131965
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055277,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055277
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7967741935483871,
"acc_stderr": 0.02289168798455496,
"acc_norm": 0.7967741935483871,
"acc_norm_stderr": 0.02289168798455496
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.020986854593289733,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.020986854593289733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6487179487179487,
"acc_stderr": 0.024203665177902803,
"acc_norm": 0.6487179487179487,
"acc_norm_stderr": 0.024203665177902803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.02874204090394848,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.02874204090394848
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.03086868260412162,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.03086868260412162
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8403669724770643,
"acc_stderr": 0.015703498348461763,
"acc_norm": 0.8403669724770643,
"acc_norm_stderr": 0.015703498348461763
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5046296296296297,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.5046296296296297,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078962,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078962
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7721518987341772,
"acc_stderr": 0.02730348459906944,
"acc_norm": 0.7721518987341772,
"acc_norm_stderr": 0.02730348459906944
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.80970625798212,
"acc_stderr": 0.014036945850381396,
"acc_norm": 0.80970625798212,
"acc_norm_stderr": 0.014036945850381396
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.023703099525258172,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.023703099525258172
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.47150837988826816,
"acc_stderr": 0.016695329746015796,
"acc_norm": 0.47150837988826816,
"acc_norm_stderr": 0.016695329746015796
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.025583062489984813,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.025583062489984813
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.024383665531035457,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.024383665531035457
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.029790719243829727,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.029790719243829727
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.470013037809648,
"acc_stderr": 0.01274724896707906,
"acc_norm": 0.470013037809648,
"acc_norm_stderr": 0.01274724896707906
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6691176470588235,
"acc_stderr": 0.028582709753898445,
"acc_norm": 0.6691176470588235,
"acc_norm_stderr": 0.028582709753898445
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6584967320261438,
"acc_stderr": 0.019184639328092487,
"acc_norm": 0.6584967320261438,
"acc_norm_stderr": 0.019184639328092487
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5862913096695227,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.7078078883926877,
"mc2_stderr": 0.015097515102384168
},
"harness|winogrande|5": {
"acc": 0.8579321231254933,
"acc_stderr": 0.009812000391679367
},
"harness|gsm8k|5": {
"acc": 0.6648976497346475,
"acc_stderr": 0.013001948176422954
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-T-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.96|
|AI2 Reasoning Challenge (25-Shot)|73.63|
|HellaSwag (10-Shot) |88.85|
|MMLU (5-Shot) |64.22|
|TruthfulQA (0-shot) |70.78|
|Winogrande (5-shot) |85.79|
|GSM8k (5-shot) |66.49|
|
ZaaCo/distilbert-base-uncased-finetuned-sst2
|
ZaaCo
| 2024-03-04T13:26:58Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-04T13:18:09Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- Accuracy: 0.8830
- F1: 0.8830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
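Taken together, these settings correspond roughly to the `TrainingArguments` sketch below (the `output_dir` is an assumption; the Adam betas/epsilon listed above are the library defaults, so no explicit argument is needed):
```python
from transformers import TrainingArguments

# Sketch reconstructed from the hyperparameter list above; output_dir is assumed
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-sst2",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```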
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6626 | 1.0 | 109 | 0.3359 | 0.8716 | 0.8714 |
| 0.2921 | 2.0 | 218 | 0.3073 | 0.8830 | 0.8830 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
Weni/ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0
|
Weni
| 2024-03-04T13:19:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-04T11:35:09Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4733
## Model description
More information needed
## Intended uses & limitations
More information needed
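Pending more details from the authors, a minimal sketch for loading this adapter on top of its base model with PEFT (the repository and base-model names come from the card metadata; everything else is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Weni/ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```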
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5854 | 0.13 | 100 | 0.5553 |
| 0.5226 | 0.27 | 200 | 0.5098 |
| 0.5001 | 0.4 | 300 | 0.4974 |
| 0.4918 | 0.53 | 400 | 0.4868 |
| 0.4844 | 0.67 | 500 | 0.4795 |
| 0.4759 | 0.8 | 600 | 0.4751 |
| 0.4773 | 0.94 | 700 | 0.4733 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ninja21/q-FrozenLake-v1-4x4-noSlippery
|
ninja21
| 2024-03-04T13:18:37Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T13:18:34Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="ninja21/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
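Once loaded, acting greedily from the Q-table is straightforward. The sketch below assumes the pickle follows the Deep RL course convention of storing the table under the "qtable" key, and a gym >= 0.26 (gymnasium-style) reset/step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```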
|
Unbabel/TowerBase-13B-v0.1
|
Unbabel
| 2024-03-04T13:18:13Z | 276 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"arxiv:2402.17733",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-26T17:43:54Z |
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---
# Model Card for TowerBase-13B-v0.1
## Model Details
### Model Description
TowerBase-13B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages — English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian — and bilingual data. TowerBase-13B-v0.1 is the first model in the series.
The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1).
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 13B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0; Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Intended uses & limitations
The model is intended for research purposes in the 10 languages it supports.
The model performs well on translation and related tasks (e.g., APE, GEC) in a few-shot regime.
It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1)), as well as on other multilingual tasks.
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports.
## Bias, Risks, and Limitations
TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Unbabel/TowerBase-13B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Data
Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
LN1996/output_run_1
|
LN1996
| 2024-03-04T13:16:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"lora",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-03-04T13:07:24Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- diffusers
- lora
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a room with professional interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - LN1996/output_run_1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "photo of a room with professional interior design" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
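Until the TODO above is filled in, a minimal sketch for trying these weights, assuming the standard diffusers DreamBooth-LoRA attention-processor format (the prompt is the instance prompt from the metadata):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("LN1996/output_run_1")  # load the LoRA weights from the Hub

image = pipe("photo of a room with professional interior design", num_inference_steps=30).images[0]
image.save("room.png")
```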
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
manishiitg/open-aditi-hi-v2
|
manishiitg
| 2024-03-04T13:14:38Z | 10 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"hi",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-05T09:51:32Z |
---
language:
- hi
- en
license: apache-2.0
base_model: teknium/OpenHermes-2.5
model-index:
- name: open-aditi-hi-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=manishiitg/open-aditi-hi-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=manishiitg/open-aditi-hi-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=manishiitg/open-aditi-hi-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=manishiitg/open-aditi-hi-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=manishiitg/open-aditi-hi-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=manishiitg/open-aditi-hi-v2
name: Open LLM Leaderboard
---
Model trained on Hindi and English data.
Try it out: https://colab.research.google.com/drive/1A_hbsq1vrCeAh3dEMvtwxxNxcNZ1BUyW?usp=sharing
For sample responses to different prompts, check out: https://github.com/manishiitg/hi-llm-eval
#### Language Hi
| Model | implicit_hate | flores | indicwikibio | hellaswag-indic | truthfulqa-hi | boolq-hi | indicheadline | indic-arc-easy | indicqa | indic-arc-challenge | indicsentiment | xlsum-hi | indicxparaphrase | mmlu_hi |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| open-aditi-hi-v2 | 11.5021 | 43.6822 | 0.4846 | 0.2404 | 0.6934 | 0.8541 | 0.4565 | 0.4979 | 0.0795 | 0.4462 | 0.9729 | 0.4213 | 0.6838 | 0.3253 |
| OpenHermes-2.5-Mistral-7B | 0.2068 | 30.3465 | 0.3332 | 0.2485 | 0.3234 | 0.5979 | 0.1996 | 0.3523 | 0.2721 | 0.3396 | 0.9048 | 0.1774 | 0.8766 | 0.2769 |
| open-aditi-hi-v1 | 8.6105 | 40.2376 | 0.4104 | 0.0848 | 0.4230 | 0.3758 | 0.4248 | 0.3889 | 0.1306 | 0.3558 | 0.8798 | 0.4212 | 0.5939 | 0.1398 |
| Airavata | 0.0663 | 58.0555 | 0.0637 | 0.0254 | 0.2122 | 0.0373 | 0.4346 | 0.1128 | 0.1008 | 0.0836 | 0.8437 | 0.4650 | 0.3277 | 0.1336 |
#### Language En
| Model | boolq | hellaswag | mmlu | truthfulqa | xlsum | arc-easy-exact | arc-challenge |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OpenHermes-2.5-Mistral-7B | 0.4061 | 0.7999 | 0.5991 | 0.2081 | 0.4328 | 0.8687 | 0.7790 |
| open-aditi-hi-v2 | 0.3982 | 0.4738 | 0.5544 | 0.2999 | 0.4349 | 0.8388 | 0.7235 |
| open-aditi-hi-v1 | 0.0434 | 0.3509 | 0.2597 | 0.3317 | 0.4288 | 0.7588 | 0.6271 |
| Airavata | 0.0437 | 0.0277 | 0.1165 | 0.3586 | 0.4393 | 0.2534 | 0.1630 |
| Task | Metric |
| --- | --- |
| flores | chrf |
| implicit_hate | chrf |
| indicsentiment | accuracy |
| indicxparaphrase | accuracy |
| boolq-hi | accuracy |
| truthfulqa-hi | accuracy |
| indic-arc-easy | accuracy |
| indicwikibio | bleurt |
| xlsum-hi | bleurt |
| indicheadline | bleurt |
| indic-arc-challenge | accuracy |
| mmlu_hi | average_acc |
| indicqa | accuracy |
| hellaswag-indic | accuracy |
| arc-easy-exact | accuracy |
| hellaswag | accuracy |
| arc-challenge | accuracy |
| mmlu | average_acc |
| xlsum | bleurt |
| boolq | accuracy |
| truthfulqa | accuracy |
Model evaluation on the Open LLM Leaderboard:


# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_manishiitg__open-aditi-hi-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |59.31|
|AI2 Reasoning Challenge (25-Shot)|59.39|
|HellaSwag (10-Shot) |82.01|
|MMLU (5-Shot) |61.41|
|TruthfulQA (0-shot) |45.84|
|Winogrande (5-shot) |77.19|
|GSM8k (5-shot) |30.02|
|
ronibandini/reggaetonbegone
|
ronibandini
| 2024-03-04T13:10:39Z | 0 | 0 | null |
[
"music",
"reggaeton",
"Edge Impulse",
"license:cc-by-4.0",
"region:us"
] | null | 2024-03-03T19:44:43Z |
---
license: cc-by-4.0
tags:
- music
- reggaeton
- Edge Impulse
---
A simple reggaeton genre classification model trained with Edge Impulse (WAV, 16 kHz, mono).
Labels: reggaeton, otros
Note: only 6 songs were used for this test, so you should train a new model with more songs for better accuracy.
For the otros (others) label, use your own ambient sounds and other music genres.
Source code https://github.com/ronibandini/reggaetonBeGone
Tutorial https://www.hackster.io/roni-bandini/reggaeton-be-gone-e5b6e2
|
Kukedlc/NeuTrixOmniBe-7B-model-remix
|
Kukedlc
| 2024-03-04T13:08:11Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CultriX/NeuralTrix-7B-dpo",
"paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:CultriX/NeuralTrix-7B-dpo",
"base_model:merge:CultriX/NeuralTrix-7B-dpo",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:merge:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-10T23:20:07Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- CultriX/NeuralTrix-7B-dpo
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
base_model:
- CultriX/NeuralTrix-7B-dpo
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
model-index:
- name: NeuTrixOmniBe-7B-model-remix
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-7B-model-remix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-7B-model-remix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-7B-model-remix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.9
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-7B-model-remix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-7B-model-remix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-7B-model-remix
name: Open LLM Leaderboard
---
# NOT FOR USE -- BUG IN RESPONSE
# NeuTrixOmniBe-7B-model-remix
NeuTrixOmniBe-7B-model-remix is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
* [paulml/OmniBeagleSquaredMBX-v3-7B-v2](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: CultriX/NeuralTrix-7B-dpo
        layer_range: [0, 32]
      - model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: CultriX/NeuralTrix-7B-dpo
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
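Slerp interpolates along the great-circle arc between the two models' weight tensors rather than a straight line; the `t` gradients above vary the blend point across layer blocks (0 keeps the base model, 1 keeps the other). A minimal, self-contained sketch of the formula (illustrative, not mergekit's actual code):
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two (normalized) weight vectors
    omega = torch.acos(torch.clamp(
        torch.dot(a / (a.norm() + eps), b / (b.norm() + eps)), -1.0, 1.0))
    if omega.abs() < eps:                    # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)

# t = 0 returns the first tensor, t = 1 the second
blended = slerp(0.5, torch.randn(4, 4), torch.randn(4, 4))
```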
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuTrixOmniBe-7B-model-remix"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuTrixOmniBe-7B-model-remix)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.30|
|AI2 Reasoning Challenge (25-Shot)|72.70|
|HellaSwag (10-Shot) |89.03|
|MMLU (5-Shot) |64.57|
|TruthfulQA (0-shot) |76.90|
|Winogrande (5-shot) |85.08|
|GSM8k (5-shot) |69.52|
|
Kukedlc/Brocae-Area-7B-slerp
|
Kukedlc
| 2024-03-04T13:08:04Z | 49 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"Kukedlc/NeuralKrishna-7B-V2-DPO",
"base_model:Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"base_model:merge:Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"base_model:Kukedlc/NeuralKrishna-7B-V2-DPO",
"base_model:merge:Kukedlc/NeuralKrishna-7B-V2-DPO",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-29T04:54:38Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- Kukedlc/NeuralKrishna-7B-V2-DPO
base_model:
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- Kukedlc/NeuralKrishna-7B-V2-DPO
model-index:
- name: Brocae-Area-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.13
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
---
# Brocae-Area-7B-slerp
Brocae-Area-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Fasciculus-Arcuatus-7B-slerp](https://huggingface.co/Kukedlc/Fasciculus-Arcuatus-7B-slerp)
* [Kukedlc/NeuralKrishna-7B-V2-DPO](https://huggingface.co/Kukedlc/NeuralKrishna-7B-V2-DPO)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Kukedlc/Fasciculus-Arcuatus-7B-slerp
        layer_range: [0, 32]
      - model: Kukedlc/NeuralKrishna-7B-V2-DPO
        layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/NeuralKrishna-7B-V2-DPO
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/Brocae-Area-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__Brocae-Area-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.86|
|AI2 Reasoning Challenge (25-Shot)|73.81|
|HellaSwag (10-Shot) |88.98|
|MMLU (5-Shot) |64.55|
|TruthfulQA (0-shot) |74.13|
|Winogrande (5-shot) |85.08|
|GSM8k (5-shot) |68.61|
|
nbeerbower/SuperBruphin-3x7B
|
nbeerbower
| 2024-03-04T13:08:03Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"moe",
"conversational",
"base_model:PetroGPT/WestSeverus-7B-DPO-v2",
"base_model:finetune:PetroGPT/WestSeverus-7B-DPO-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T04:10:30Z |
---
license: apache-2.0
tags:
- mergekit
- moe
base_model:
- nbeerbower/bruphin-epsilon
- FelixChao/WestSeverus-7B-DPO-v2
- jondurbin/airoboros-m-7b-3.1.2
model-index:
- name: SuperBruphin-3x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.85
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
---
# SuperBruphin-3x7B
This is an experimental MoE model created using the `mixtral` branch of [mergekit](https://github.com/cg123/mergekit).
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-epsilon](https://huggingface.co/nbeerbower/bruphin-epsilon)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [jondurbin/airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: nbeerbower/bruphin-epsilon
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: nbeerbower/bruphin-epsilon
    positive_prompts:
      - "Tell a story."
  - source_model: FelixChao/WestSeverus-7B-DPO-v2
    positive_prompts:
      - "Solve this problem."
  - source_model: jondurbin/airoboros-m-7b-3.1.2
    positive_prompts:
      - "Write a letter."
```
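Here `gate_mode: hidden` initializes each expert's router weights from hidden-state representations of its `positive_prompts`, so inputs resembling those prompts tend to be routed to that expert. The card ships no usage snippet; below is a minimal loading sketch in the style of the other cards in this dump (the prompt and generation settings are illustrative, not the author's):
```python
import torch
import transformers

model = "nbeerbower/SuperBruphin-3x7B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)
outputs = pipeline("Tell a story about a lighthouse keeper.",
                   max_new_tokens=128, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```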
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__SuperBruphin-3x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.75|
|AI2 Reasoning Challenge (25-Shot)|71.16|
|HellaSwag (10-Shot) |87.74|
|MMLU (5-Shot) |64.58|
|TruthfulQA (0-shot) |66.85|
|Winogrande (5-shot) |81.53|
|GSM8k (5-shot) |70.66|
|
Kukedlc/NeuralKrishna-7B-slerp
|
Kukedlc
| 2024-03-04T13:07:43Z | 57 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural4gsm8k",
"Kukedlc/NeuralMaxime-7B-slerp",
"conversational",
"base_model:Kukedlc/Neural4gsm8k",
"base_model:merge:Kukedlc/Neural4gsm8k",
"base_model:Kukedlc/NeuralMaxime-7B-slerp",
"base_model:merge:Kukedlc/NeuralMaxime-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T18:06:51Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural4gsm8k
- Kukedlc/NeuralMaxime-7B-slerp
base_model:
- Kukedlc/Neural4gsm8k
- Kukedlc/NeuralMaxime-7B-slerp
model-index:
- name: NeuralKrishna-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
---
# NeuralKrishna-7B-slerp

NeuralKrishna-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k)
* [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Kukedlc/Neural4gsm8k
        layer_range: [0, 32]
      - model: Kukedlc/NeuralMaxime-7B-slerp
        layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/NeuralMaxime-7B-slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuralKrishna-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralKrishna-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.79|
|AI2 Reasoning Challenge (25-Shot)|73.46|
|HellaSwag (10-Shot) |88.96|
|MMLU (5-Shot) |64.62|
|TruthfulQA (0-shot) |74.29|
|Winogrande (5-shot) |83.27|
|GSM8k (5-shot) |70.13|
|
Kukedlc/NeuralKrishna-7B-V2-DPO
|
Kukedlc
| 2024-03-04T13:07:18Z | 132 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-28T04:05:12Z |
---
license: apache-2.0
model-index:
- name: NeuralKrishna-7B-V2-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 74.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-V2-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.97
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-V2-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-V2-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-V2-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-V2-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-V2-DPO
name: Open LLM Leaderboard
---
# Neural Krishna DPO
## Fine-tuning + length(chosen)
- Training Args:
```python
# Assumed setup: imports and placeholders were not shown in the original card
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "..."   # placeholder: the pre-DPO checkpoint (not specified in the card)
new_model = "NeuralKrishna-7B-V2-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = ...        # placeholder: a preference dataset with prompt/chosen/rejected columns

# LoRA configuration
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)

# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    load_in_4bit=True
)
model.config.use_cache = False

# Training arguments
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=120,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=50,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)

# Fine-tune model with DPO
dpo_trainer.train()
```
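For reference, `DPOTrainer` with `beta=0.1` optimizes the Direct Preference Optimization objective, which raises the likelihood of chosen over rejected completions relative to a frozen reference model. A toy restatement of the loss on per-sequence log-probabilities:
```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss; inputs are per-sequence log-probabilities summed over tokens."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Larger beta sharpens the preference margin the policy is pushed toward
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```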
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralKrishna-7B-V2-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.00|
|AI2 Reasoning Challenge (25-Shot)|74.06|
|HellaSwag (10-Shot) |88.97|
|MMLU (5-Shot) |64.41|
|TruthfulQA (0-shot) |76.19|
|Winogrande (5-shot) |84.29|
|GSM8k (5-shot) |68.08|
|
KapitalK/course_eval_gpt2_xl
|
KapitalK
| 2024-03-04T13:04:30Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt2",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-large",
"base_model:adapter:openai-community/gpt2-large",
"region:us"
] | null | 2024-03-04T09:29:12Z |
---
library_name: peft
base_model: openai-community/gpt2-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
flammenai/flammen-mistral-7B
|
flammenai
| 2024-03-04T13:04:03Z | 23 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:Gille/StrangeMerges_16-7B-slerp",
"base_model:merge:Gille/StrangeMerges_16-7B-slerp",
"base_model:bardsai/jaskier-7b-dpo-v5.6",
"base_model:merge:bardsai/jaskier-7b-dpo-v5.6",
"base_model:nbeerbower/bruphin-zeta",
"base_model:merge:nbeerbower/bruphin-zeta",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T03:37:37Z |
---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- bardsai/jaskier-7b-dpo-v5.6
- nbeerbower/bruphin-zeta
- Gille/StrangeMerges_16-7B-slerp
model-index:
- name: flammen
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/flammen
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/flammen
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/flammen
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.02
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/flammen
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/flammen
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/flammen
name: Open LLM Leaderboard
---
# flammen
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-zeta](https://huggingface.co/nbeerbower/bruphin-zeta)
* [Gille/StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bardsai/jaskier-7b-dpo-v5.6
  - model: nbeerbower/bruphin-zeta
    parameters:
      density: 0.5
      weight: 0.5
  - model: Gille/StrangeMerges_16-7B-slerp
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
  normalize: true
dtype: bfloat16
```
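TIES merging trims each model's task vector (its delta from the base) to the highest-magnitude `density` fraction, elects a per-parameter sign by majority mass, and averages only the values that agree with that sign before adding the result back to the base. A toy sketch of the procedure on flattened tensors (illustrative, not mergekit's implementation, which additionally normalizes the weights per the config above):
```python
import torch

def ties_merge(base, finetuned, weights, density=0.5):
    """Toy TIES: trim task vectors, elect a per-parameter sign, merge agreeing values."""
    deltas = []
    for ft, w in zip(finetuned, weights):
        d = (ft - base) * w                              # weighted task vector
        k = max(1, int(density * d.numel()))             # trim: keep top-k by magnitude
        thresh = d.abs().flatten().topk(k).values.min()
        deltas.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))             # elect sign by total mass
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged                                 # disjoint mean added to base

# Toy usage with two "fine-tuned" variants of a base tensor
base = torch.zeros(8)
merged = ties_merge(base, [torch.randn(8), torch.randn(8)], weights=[0.5, 0.3])
```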
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__flammen)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.74|
|AI2 Reasoning Challenge (25-Shot)|68.17|
|HellaSwag (10-Shot) |87.06|
|MMLU (5-Shot) |64.68|
|TruthfulQA (0-shot) |63.02|
|Winogrande (5-shot) |81.45|
|GSM8k (5-shot) |66.03|
|
Menouar/saqr-7b-beta
|
Menouar
| 2024-03-04T13:01:09Z | 3 | 1 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:Menouar/saqr-7b-merged",
"base_model:adapter:Menouar/saqr-7b-merged",
"license:apache-2.0",
"region:us"
] | null | 2024-02-16T09:58:22Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: Menouar/saqr-7b-merged
model-index:
- name: saqr-7b-beta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saqr-7b-beta
This model is a fine-tuned version of [Menouar/saqr-7b-merged](https://huggingface.co/Menouar/saqr-7b-merged) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Menouar__saqr-7b-beta)
| Metric |Value|
|---------------------------------|----:|
|Avg. |44.84|
|AI2 Reasoning Challenge (25-Shot)|47.78|
|HellaSwag (10-Shot) |77.61|
|MMLU (5-Shot) |25.80|
|TruthfulQA (0-shot) |39.38|
|Winogrande (5-shot) |70.56|
|GSM8k (5-shot) | 7.88|
|
pankajmathur/Lima_Unchained_70b
|
pankajmathur
| 2024-03-04T13:00:54Z | 1,480 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:psmathur/lima_unchained_v1",
"arxiv:2305.11206",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T02:49:09Z |
---
language:
- en
license: llama2
library_name: transformers
datasets:
- psmathur/lima_unchained_v1
model-index:
- name: test_42_70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/test_42_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/test_42_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/test_42_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/test_42_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/test_42_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/test_42_70b
name: Open LLM Leaderboard
---
# Lima_Unchained_70b
A Llama2-70b model fine-tuned with QLoRA on all linear layers, using ~900 carefully selected conversations from the [LIMA](https://arxiv.org/pdf/2305.11206.pdf) dataset.
<br>
**P.S. If you're interested in collaborating, please connect with me at www.linkedin.com/in/pankajam.**
## Evaluation
We evaluated Lima_Unchained_70b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|**Stderr**|
|:------:|:--------:|:-------:|:--------:|
|*arc_challenge*|acc_norm|0.6826|0.0141|
|*hellaswag*|acc_norm|0.8765|0.0038|
|*mmlu*|acc_norm|0.70|0.0351|
|*truthfulqa_mc*|mc2|0.4876|0.0157|
|**Total Average**|-|**0.6867**||
<br>
## Example Usage
Here is the prompt format
```
### User:
Write a stand-up skit in the style of George Carlin that ridicules Pacific Gas and Electric.
### Assistant:
```
Below shows a code example on how to use this model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "pankajmathur/Lima_Unchained_70b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    device_map="auto"
)

# generate text steps
instruction = "Write a stand-up skit in the style of George Carlin that ridicules Pacific Gas and Electric."
prompt = f"### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Lima_Unchained_70b,
  author = {Pankaj Mathur},
  title = {Lima_Unchained_70b: A LIMA style Llama2-70b model},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/model_42_70b}},
}
```
```
@misc{ChuntingZhou,
  title={LIMA: Less Is More for Alignment},
  author={Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu,
          Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy},
  year={2023},
  eprint={2305.11206},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
```
@software{touvron2023llama2,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
          Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
          Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann,
          Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
          Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
          Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
          Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
  year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_42_70b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 58.2 |
| ARC (25-shot) | 68.26 |
| HellaSwag (10-shot) | 87.65 |
| MMLU (5-shot) | 70.0 |
| TruthfulQA (0-shot) | 48.76 |
| Winogrande (5-shot) | 83.66 |
| GSM8K (5-shot) | 34.72 |
| DROP (3-shot) | 14.37 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__test_42_70b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.38|
|AI2 Reasoning Challenge (25-Shot)|68.26|
|HellaSwag (10-Shot) |87.65|
|MMLU (5-Shot) |70.00|
|TruthfulQA (0-shot) |48.76|
|Winogrande (5-shot) |83.66|
|GSM8k (5-shot) |45.94|
|
Menouar/phi-2-basic-maths
|
Menouar
| 2024-03-04T13:00:21Z | 5 | 2 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"pytorch",
"text-generation",
"en",
"dataset:gsm8k",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"model-index",
"region:us"
] |
text-generation
| 2024-02-05T20:15:16Z |
---
language:
- en
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- pytorch
datasets:
- gsm8k
base_model: microsoft/phi-2
pipeline_tag: text-generation
model-index:
- name: phi-2-basic-maths
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 71.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthfulqa
config: truthfulqa
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.4
name: mc2
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.4
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths
name: Open LLM Leaderboard
---
# phi-2-basic-maths
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [GSM8K dataset](https://huggingface.co/datasets/gsm8k).
## Model Description
The objective of this model is to evaluate Phi-2's ability to provide correct solutions to reasoning problems after fine-tuning. This model was trained using TRL, QLoRA (quantized LoRA), and Flash Attention.
To test it, you can use the following code:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

# Specify the model ID
peft_model_id = "Menouar/phi-2-basic-maths"

# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    peft_model_id,
    device_map="auto",
    torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```
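From here, generation works like any other text-generation pipeline. A hypothetical call follows: the problem text is the first GSM8K sample, the `Instruct:`/`Output:` framing follows phi-2's documented prompt format, and the generation settings are illustrative rather than the author's:
```python
prompt = (
    "Instruct: Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether in April "
    "and May?\nOutput:"
)
result = pipe(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```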
## Training procedure
The complete training procedure can be found on my [Notebook](https://colab.research.google.com/drive/1mvfoEqc0mwuf8FqrABWt06qwAsU2QrvK).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 42
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 84
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
The training results can be found on [Tensoboard](https://huggingface.co/Menouar/phi-2-basic-maths/tensorboard).
## Evaluation procedure
The complete Evaluation procedure can be found on my [Notebook](https://colab.research.google.com/drive/1xsdxOm-CgZmLAPFgp8iU9lLFEIIHGiUK).
- Accuracy: 36.16%
- Unclear answers: 7.81%
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Menouar__phi-2-basic-maths)
| Metric |Value|
|---------------------------------|----:|
|Avg. |53.60|
|AI2 Reasoning Challenge (25-Shot)|55.80|
|HellaSwag (10-Shot) |71.15|
|MMLU (5-Shot) |47.27|
|TruthfulQA (0-shot) |41.40|
|Winogrande (5-shot) |75.30|
|GSM8k (5-shot) |30.71|
|
faridlazuarda/data_laundry
|
faridlazuarda
| 2024-03-04T13:00:04Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-04T12:49:18Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- f1
model-index:
- name: data_laundry
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: test
args: en-US
metrics:
- name: F1
type: f1
value: 0.7982630095389778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data_laundry
This model is a fine-tuned version of [](https://huggingface.co/) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0394
- F1: 0.7983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they might map onto a `Trainer` setup follows this list):
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
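As a rough illustration, these settings could correspond to a `transformers` training script along the following lines. The BERT backbone, dataset id, and column handling are assumptions (the card leaves its base-model link blank), so treat this as a sketch rather than the author's script:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed backbone and dataset id; the card does not name its base model.
dataset = load_dataset("AmazonScience/massive", "en-US")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["utt"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
dataset = dataset.rename_column("intent", "labels")  # Trainer expects a "labels" column

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=60,  # MASSIVE defines 60 intent classes
)

args = TrainingArguments(
    output_dir="data_laundry",
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=500,
    num_train_epochs=10,
    evaluation_strategy="epoch",
)
Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
).train()
```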
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.4021 | 0.5 | 719 | 4.2520 | 0.1592 |
| 3.5936 | 1.0 | 1438 | 3.0222 | 0.4171 |
| 2.6221 | 1.5 | 2157 | 2.2881 | 0.5433 |
| 2.1941 | 2.0 | 2876 | 2.0383 | 0.6238 |
| 1.8688 | 2.5 | 3595 | 1.7730 | 0.6562 |
| 1.7334 | 3.0 | 4314 | 1.6450 | 0.6885 |
| 1.5359 | 3.5 | 5033 | 1.5398 | 0.7121 |
| 1.4867 | 3.99 | 5752 | 1.4296 | 0.7324 |
| 1.353 | 4.49 | 6471 | 1.3721 | 0.7393 |
| 1.3088 | 4.99 | 7190 | 1.3110 | 0.7530 |
| 1.2006 | 5.49 | 7909 | 1.2633 | 0.7482 |
| 1.1845 | 5.99 | 8628 | 1.2417 | 0.7635 |
| 1.111 | 6.49 | 9347 | 1.2015 | 0.7795 |
| 1.0722 | 6.99 | 10066 | 1.1605 | 0.7813 |
| 1.0121 | 7.49 | 10785 | 1.1383 | 0.7781 |
| 1.018 | 7.99 | 11504 | 1.1024 | 0.7990 |
| 0.9202 | 8.49 | 12223 | 1.0905 | 0.7898 |
| 0.9639 | 8.99 | 12942 | 1.0695 | 0.7967 |
| 0.915 | 9.49 | 13661 | 1.0444 | 0.7935 |
| 0.8743 | 9.99 | 14380 | 1.0394 | 0.7983 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
abhinand/malayalam-llama-7b-instruct-v0.1
|
abhinand
| 2024-03-04T12:56:47Z | 456 | 11 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"ml",
"arxiv:2311.05845",
"base_model:abhinand/malayalam-llama-pt-v0",
"base_model:finetune:abhinand/malayalam-llama-pt-v0",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T02:20:49Z |
---
language:
- en
- ml
license: llama2
base_model: abhinand/malayalam-llama-pt-v0
model-index:
- name: malayalam-llama-instruct-v0.1
results: []
---
# Malayalam LLaMA 7B Instruct v0.1
Welcome to the inaugural release of the Malayalam LLaMA 7B instruct model – an important step in advancing LLMs for the Malayalam language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
> **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon.
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
## Demo:
To access an easy-to-use, no-code demo, please open the provided Google Colab notebook. Complete instructions for usage are included within the notebook itself.
<a href="https://colab.research.google.com/drive/11_RHZim_HubD2NskxSwq4X_NW4XlnrtS?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"></a>
## Model description
The Malayalam LLaMA models have been enhanced and tailored specifically with an extensive Malayalam vocabulary of ~16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Malayalam samples. (Dataset will be released soon)
- **Language(s):** Bilingual. English and Malayalam.
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [To be released soon]()
- **Training Precision:** `bfloat16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon)
## Prompt Template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
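For clarity, a minimal formatter that reproduces this template (the usage example further down achieves the same thing through `tokenizer.apply_chat_template`):
```python
def format_chatml(system_message: str, prompt: str) -> str:
    """Render a single-turn ChatML prompt as expected by this model."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(format_chatml("You are a helpful assistant.", "Hello!"))
```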
## Benchmark Results
Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/).
> **Note:** Please note that discrepancies have been observed between the Open LLM Leaderboard scores and those obtained from local runs using the LM Eval Harness with identical configurations. The results mentioned here are based on our own benchmarking. To replicate these findings, you can utilize the LLM-Autoeval or use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) locally with the configurations described in Open LLM Leaderboard's About page.
| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct |
|---------------|--------------|---------------------------|-----------------------|--------------------------|
| ARC Challenge (25-shot) | 52.9 | **53.75** | 52.47 | 52.82 |
| TruthfulQA (0-shot) | 45.57 | 47.23 | **48.47** | 47.46 |
| Hellaswag (10-shot) | **78.55** | 76.11 | 76.13 | 76.91 |
| Winogrande (5-shot) | 71.74 | **73.95** | 71.74 | 73.16 |
| AGI Eval (0-shot) | 29.3 | **30.95** | 28.44 | 29.6 |
| BigBench (0-shot) | 32.6 | 33.08 | 32.99 | **33.26** |
| Average | 51.78 | **52.51** | 51.71 | 52.2 |
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B v0.1 Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B v0.1 Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Tamil LLaMA 7B v0.2 Instruct | Instruction/Chat model | 420k instructions | Tamil LLaMA 7B Base v0.2 | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Telugu LLaMA 7B v0.2 Instruct | Instruction/Chat model | ~400k instructions | Telugu LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/telugu-llama-instruct-v0.1) |
## Example Usage
```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer, pipeline

model = LlamaForCausalLM.from_pretrained(
    "abhinand/malayalam-llama-instruct-v0.1",
    # load_in_8bit=True,    # Set this depending on the GPU you have
    torch_dtype=torch.bfloat16,
    device_map={"": 0},     # Set this depending on the number of GPUs you have
    local_files_only=False  # Optional
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained("abhinand/malayalam-llama-instruct-v0.1")
inf_pipeline = pipeline("conversational", model=model, tokenizer=tokenizer)

def format_instruction(system_prompt, question, return_dict=False):
    if system_prompt is None:
        messages = [
            {'content': question, 'role': 'user'},
        ]
    else:
        messages = [
            {'content': system_prompt, 'role': 'system'},
            {'content': question, 'role': 'user'},
        ]
    if return_dict:
        return messages
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    return prompt

# Set the generation configuration according to your needs
temperature = 0.6
repetition_penalty = 1.1
max_new_tokens = 256

SYSTEM_PROMPT = "You are an AI assistant who follows instructions extremely well. Do your best to help."
INPUT = "Give 3 tips to lead a healthy life"

instruction = format_instruction(
    system_prompt=SYSTEM_PROMPT,
    question=INPUT,
    return_dict=True,
)

output = inf_pipeline(
    instruction,
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    repetition_penalty=repetition_penalty
)
print(output)
```
**Example Output:**
```
Conversation id: d57cdf33-01ff-4328-8efe-5c4fefdd6e77
system: You are an AI assistant who follows instructions extremely well. Do your best to help.
user: Give 3 tips to lead a healthy life
assistant: 1. Establish a regular sleep schedule: Aim for 7-9 hours of sleep per night, which is crucial for overall health and well-being.
2. Engage in physical activity regularly: Incorporate exercise into your daily routine, even if it's just a brisk walk or some light stretching. Regular physical activity can help reduce stress, improve mood, and maintain a healthy weight.
3. Eat a balanced diet: Consume a variety of whole grains, lean proteins, fruits, vegetables, and healthy fats. Limit processed foods, sugary drinks, and excessive amounts of sodium.
```
## Usage Note
It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama related work in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Malayalam language.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__malayalam-llama-7b-instruct-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |39.69|
|AI2 Reasoning Challenge (25-Shot)|37.20|
|HellaSwag (10-Shot) |67.81|
|MMLU (5-Shot) |23.12|
|TruthfulQA (0-shot) |47.11|
|Winogrande (5-shot) |62.90|
|GSM8k (5-shot) | 0.00|
|
abhinand/telugu-llama-7b-instruct-v0.1
|
abhinand
| 2024-03-04T12:56:39Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"te",
"arxiv:2311.05845",
"base_model:abhinand/dr-llama-te-instruct-v0",
"base_model:finetune:abhinand/dr-llama-te-instruct-v0",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T01:57:33Z |
---
language:
- en
- te
license: llama2
base_model: abhinand/dr-llama-te-instruct-v0
model-index:
- name: telugu-llama-instruct-v0.1
results: []
---
# Telugu LLaMA 7B Instruct v0.1
Welcome to the inaugural release of the Telugu LLaMA 7B instruct model – an important step in advancing LLMs for the Telugu language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
> **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon.
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
## Demo:
To access an easy-to-use, no-code demo, please open the provided Google Colab notebook. Complete instructions for usage are included within the notebook itself.
<a href="https://colab.research.google.com/drive/11_RHZim_HubD2NskxSwq4X_NW4XlnrtS?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"></a>
## Model description
The Telugu LLaMA models have been enhanced and tailored specifically with an extensive Telugu vocabulary of ~16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Telugu samples. (Dataset will be released soon)
- **Language(s):** Bilingual. English and Telugu.
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [To be released soon]()
- **Training Precision:** `bfloat16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon)
## Prompt Template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Benchmark Results
Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/).
> **Note:** Please note that discrepancies have been observed between the Open LLM Leaderboard scores and those obtained from local runs using the LM Eval Harness with identical configurations. The results mentioned here are based on our own benchmarking. To replicate these findings, you can utilize the LLM-Autoeval or use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) locally with the configurations described in Open LLM Leaderboard's About page.
| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct |
|---------------|--------------|---------------------------|-----------------------|--------------------------|
| ARC Challenge (25-shot) | 52.9 | **53.75** | 52.47 | 52.82 |
| TruthfulQA (0-shot) | 45.57 | 47.23 | **48.47** | 47.46 |
| Hellaswag (10-shot) | **78.55** | 76.11 | 76.13 | 76.91 |
| Winogrande (5-shot) | 71.74 | **73.95** | 71.74 | 73.16 |
| AGI Eval (0-shot) | 29.3 | **30.95** | 28.44 | 29.6 |
| BigBench (0-shot) | 32.6 | 33.08 | 32.99 | **33.26** |
| Average | 51.78 | **52.51** | 51.71 | 52.2 |
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B v0.1 Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B v0.1 Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Tamil LLaMA 7B v0.2 Instruct | Instruction/Chat model | 500k instructions | Tamil LLaMA 7B Base v0.2 | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Malayalam LLaMA 7B v0.2 Instruct | Instruction/Chat model | 500k instructions | Malayalam LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
## Example Usage
```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer, pipeline
model = LlamaForCausalLM.from_pretrained(
"abhinand/telugu-llama-instruct-v0.1",
#load_in_8bit=True, # Set this depending on the GPU you have
torch_dtype=torch.bfloat16,
device_map={"": 0}, # Set this depending on the number of GPUs you have
local_files_only=False # Optional
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("abhinand/telugu-llama-instruct-v0.1")
inf_pipeline = pipeline("conversational", model=model, tokenizer=tokenizer)
def format_instruction(system_prompt, question, return_dict=False):
if system_prompt is None:
messages = [
{'content': question, 'role': 'user'},
]
else:
messages = [
{'content': system_prompt, 'role': 'system'},
{'content': question, 'role': 'user'},
]
if return_dict:
return messages
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
return prompt
# Set the generation configuration according to your needs
temperature = 0.6
repetition_penalty = 1.1
max_new_tokens = 256
SYSTEM_PROMPT = "You are an AI assistant who follows instructions extremely well. Do your best to help."
INPUT = "Who were the Nizams of Hyderabad?"
instruction = format_instruction(
system_prompt=SYSTEM_PROMPT,
question=INPUT,
return_dict=True,
)
output = inf_pipeline(
instruction,
temperature=temperature,
max_new_tokens=max_new_tokens,
repetition_penalty=repetition_penalty
)
print(output)
```
**Example Output:**
```
Conversation id: d57cdf33-01ff-4328-8efe-5c4fefdd6e77
system: You are an AI assistant who follows instructions extremely well. Do your best to help.
user: Who were the Nizams of Hyderabad?
assistant: The Nizams of Hyderabad were a dynasty that ruled the Deccan Plateau in southern India, including the city of Hyderabad. They were known for their wealth and patronage of art and culture. The last Nizam, Mir Osman Ali Khan, was one of the richest people in the world at the time of his death in 1967.
```
## Usage Note
It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama related work in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Telugu language.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__telugu-llama-7b-instruct-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |39.77|
|AI2 Reasoning Challenge (25-Shot)|37.12|
|HellaSwag (10-Shot) |67.92|
|MMLU (5-Shot) |23.12|
|TruthfulQA (0-shot) |49.05|
|Winogrande (5-shot) |61.40|
|GSM8k (5-shot) | 0.00|
|
abhinand/gemma-2b-it-tamil-v0.1-alpha
|
abhinand
| 2024-03-04T12:56:10Z | 97 | 5 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"en",
"ta",
"dataset:vicgalle/alpaca-gpt4",
"dataset:abhinand/tamil-alpaca",
"base_model:abhinand/gemma-2b-tamil",
"base_model:finetune:abhinand/gemma-2b-tamil",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-25T09:59:30Z |
---
language:
- en
- ta
license: other
datasets:
- vicgalle/alpaca-gpt4
- abhinand/tamil-alpaca
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model: abhinand/gemma-2b-tamil
model-index:
- name: gemma-2b-it-tamil-v0.1-alpha
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 50.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 71.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.63
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 16.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha
name: Open LLM Leaderboard
---
# Gemma 2B Tamil v0.1 Alpha [Experimental Release]
This is a Tamil instruction finetuned version of Google's Gemma 2B model. This is an experiment to see if Gemma can be adapted for Tamil without expanding vocabulary. While the responses may be rusty at times, it shows a lot of promise for a 2B parameter model.
**Procedure:**
1. The [Gemma base model](https://huggingface.co/google/gemma-2b) was continually pretrained on all available Tamil Wikipedia data for 3 epochs.
2. The updated model was then finetuned on a mix of English and Tamil alpaca datasets for 5 epochs.
> **Note:** This project is currently under development (FOR TAMIL). The initial pretraining phase may not have been extensive enough, which suggests that the model's performance could improve by extending the pretraining on a larger dataset, such as CulturaX.
### 🏆 Benchmarks
This model outperforms Google's Gemma 2B base and instruct models on all benchmarks in the Nous evaluation suite. It also, surprisingly, outperforms [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) (the best-performing 2B model on these benchmarks as of Feb 25, 2024) despite being a model aimed at language adaptation.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
|[gemma-2b-it-tamil-v0.1-alpha](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha)[📄](https://gist.github.com/abhinand5/559d542437f6b7060fee94cc1f7861f5)| 39.41| 23.38| 58.94| 43.18| 32.14|
| [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) [📄](https://gist.github.com/mlabonne/4b638752fc3227df566f9562064cb864) | 38.39 | 24.48 | 51.22 | 47.02 | 30.85 |
| [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) [📄](https://gist.github.com/mlabonne/db0761e74175573292acf497da9e5d95) | 36.1 | 23.76 | 43.6 | 47.64 | 29.41 |
| [google/gemma-2b](https://huggingface.co/google/gemma-2b) [📄](https://gist.github.com/mlabonne/7df1f238c515a5f63a750c8792cef59e) | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 |
## Model description
- **Model type:** A 2B parameter GPT-like model finetuned on 100,000 samples consisting of an equal proportion of English and Tamil samples.
- **Language(s):** Bilingual. English and Tamil.
- **License:** [Google Gemma Terms of Use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [abhinand/gemma-2b-tamil](https://huggingface.co/abhinand/gemma-2b-tamil)
- **Training Precision:** `bfloat16`
- **Training Hardware:** 4x Nvidia RTX 3090 GPUs
- **Training Cost:** $20
## Support my work
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
## Prompting Format [Alpaca]
**Prompt Template Without Input**
```
{system_prompt}
### Instruction:
{instruction or query}
### Response:
{response}
```
**Prompt Template With Input**
```
{system_prompt}
### Instruction:
{instruction or query}
### Input:
{input}
### Response:
{response}
```
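The card does not include a usage snippet, so below is a minimal inference sketch assuming the Alpaca template above (the system prompt, instruction, and sampling settings are illustrative placeholders, not the author's reference values):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhinand/gemma-2b-it-tamil-v0.1-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Render the "without input" Alpaca template shown above
system_prompt = "You are a helpful assistant."  # placeholder
instruction = "Give 3 tips to lead a healthy life."
prompt = f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```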
## Usage Note
It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__gemma-2b-it-tamil-v0.1-alpha)
| Metric |Value|
|---------------------------------|----:|
|Avg. |47.60|
|AI2 Reasoning Challenge (25-Shot)|50.09|
|HellaSwag (10-Shot) |71.41|
|MMLU (5-Shot) |39.94|
|TruthfulQA (0-shot) |42.63|
|Winogrande (5-shot) |64.96|
|GSM8k (5-shot) |16.60|
|
abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
|
abhinand
| 2024-03-04T12:55:57Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:abhinand/ultrachat_200k_sharegpt",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T19:00:10Z |
---
language:
- en
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- abhinand/ultrachat_200k_sharegpt
model-index:
- name: TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 33.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 58.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 36.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 5.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
---
# TinyLLaMA OpenHermes2.5 [Work in Progress]
This is a finetune of the TinyLlama base model, trained on [OpenHermes 2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [UltraChat 200k](https://huggingface.co/datasets/abhinand/ultrachat_200k_sharegpt) for a single epoch.
Training was generously supported by [Jarvislabs.ai](https://jarvislabs.ai/).
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
is_llama_derived_model: true
# huggingface repo
datasets:
- path: teknium/OpenHermes-2.5
type: sharegpt
conversation: chatml
train_on_split: train
- path: abhinand/ultrachat_200k_sharegpt
type: sharegpt
conversation: chatml
train_on_split: train
load_in_4bit: false
load_in_8bit: false
bf16: true # require >=ampere
chat_template: chatml
dataset_prepared_path: last_run_prepared_path
hub_model_id: abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v1.0
group_by_length: false
val_set_size: 0.0
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lora_modules_to_save:
- embed_tokens
- lm_head
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
output_dir: /home/tiny-llama/trained_models
gradient_accumulation_steps: 2
micro_batch_size: 32
eval_batch_size: 32
num_epochs: 1
logging_steps: 1
save_steps: 50
save_total_limit: 3
save_safetensors: true
gradient_checkpointing: true
lr_scheduler: cosine
optimizer: "adamw_bnb_8bit"
adam_beta2: 0.95
adam_epsilon: 0.00001
weight_decay: 0.1
learning_rate: 0.0005
max_grad_norm: 1.0
warmup_ratio: 0.05
# warmup_steps: 100
flash_attention: true
# Resume from a specific checkpoint dir
resume_from_checkpoint:
# If resume_from_checkpoint isn't set and you simply want it to start where it left off.
# Be careful with this being turned on between different models.
# auto_resume_from_checkpoints: true
# wandb configuration if you're using it
# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.
wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
wandb_project: "tiny-llama-sft"
wandb_name:
wandb_run_id:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
tokens: # these are delimiters
- "<|im_start|>"
- "<|im_end|>"
```
</details>
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 476
- num_epochs: 1
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
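The card does not yet include an inference example; here is a minimal sketch assuming the ChatML template from the axolotl config above was saved with the tokenizer (message contents and sampling settings are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what supervised fine-tuning is in one paragraph."},
]
# apply_chat_template renders the ChatML format (<|im_start|>/<|im_end|>) set in the config
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```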
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.59|
|AI2 Reasoning Challenge (25-Shot)|33.79|
|HellaSwag (10-Shot) |58.72|
|MMLU (5-Shot) |24.52|
|TruthfulQA (0-shot) |36.22|
|Winogrande (5-shot) |60.93|
|GSM8k (5-shot) | 5.38|
|
abhinand/tamil-llama-7b-instruct-v0.1
|
abhinand
| 2024-03-04T12:55:08Z | 3,718 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ta",
"en",
"arxiv:2311.05845",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-08T03:45:18Z |
---
language:
- ta
- en
license: llama2
model-index:
- name: tamil-llama-7b-instruct-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 48.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-7b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 70.97
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-7b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-7b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-7b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-7b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-7b-instruct-v0.1
name: Open LLM Leaderboard
---
# Tamil LLaMA 7B Instruct v0.1
Welcome to the inaugural release of the Tamil LLaMA 7B instruct model – an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
## Model description
The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model fine-tuned on [Tamil-Alpaca-Orca](https://huggingface.co/datasets/abhinand/tamil-alpaca-orca) - a mix of Tamil-translated [Stanford-Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and a subset of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) datasets.
- **Language(s):** Tamil and English
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [abhinand/tamil-llama-7b-base-v0.1](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1)
- **Training Precision:** `float16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama)
## Prompting Format
**Prompt Template Without Input**
```
{system_prompt}
### Instruction:
{instruction or query}
### Response:
{response}
```
**Prompt Template With Input**
```
{system_prompt}
### Instruction:
{instruction or query}
### Input:
{input}
### Response:
{response}
```
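A small helper that renders the two templates above (a sketch only; the system prompt and texts are placeholders):

```python
def build_alpaca_prompt(instruction, system_prompt, input_text=None):
    """Render the Alpaca-style prompt shown above, with or without an Input block."""
    if input_text:
        return (f"{system_prompt}\n\n### Instruction:\n{instruction}"
                f"\n\n### Input:\n{input_text}\n\n### Response:\n")
    return f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

# Example: a prompt with an Input block
prompt = build_alpaca_prompt(
    instruction="Summarize the following paragraph in Tamil.",
    system_prompt="You are a helpful assistant.",  # placeholder
    input_text="<paragraph to summarize>",
)
print(prompt)
```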
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama datasets in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__tamil-llama-7b-instruct-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.52|
|AI2 Reasoning Challenge (25-Shot)|48.04|
|HellaSwag (10-Shot) |70.97|
|MMLU (5-Shot) |39.95|
|TruthfulQA (0-shot) |41.70|
|Winogrande (5-shot) |70.64|
|GSM8k (5-shot) | 1.82|
|
wandb/gemma-7b-zephyr-sft
|
wandb
| 2024-03-04T12:54:59Z | 118 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-28T11:20:03Z |
---
license: other
library_name: transformers
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: google/gemma-7b
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
model-index:
- name: gemma-7b-zephyr-sft
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.43
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.73
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
name: Open LLM Leaderboard
---
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr)
# Gemma 7B Zephyr SFT
The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) SFT recipe applied on top of Gemma 7B
## Model description
- **Model type:** A 8.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
## Recipe
We trained using the [alignment handbook recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_sft.py), logging to W&B
Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape)
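For a quick smoke test, a minimal inference sketch (illustrative only; it assumes the chat template shipped with the tokenizer, and the sampling settings are arbitrary):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="wandb/gemma-7b-zephyr-sft",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Zephyr SFT recipe in two sentences."}]
# The tokenizer's chat template handles Gemma's turn formatting
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```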
## License
This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms)
## Compute provided by [Lambda Labs](https://lambdalabs.com/) - 8xA100 80GB node
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_wandb__gemma-7b-zephyr-sft)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.64|
|AI2 Reasoning Challenge (25-Shot)|61.43|
|HellaSwag (10-Shot) |80.73|
|MMLU (5-Shot) |60.33|
|TruthfulQA (0-shot) |43.35|
|Winogrande (5-shot) |74.19|
|GSM8k (5-shot) |49.81|
|
wandb/gemma-7b-zephyr-dpo
|
wandb
| 2024-03-04T12:54:13Z | 22 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:wandb/gemma-7b-zephyr-sft",
"base_model:finetune:wandb/gemma-7b-zephyr-sft",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-28T11:39:50Z |
---
license: other
library_name: transformers
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: wandb/gemma-7b-zephyr-sft
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
model-index:
- name: gemma-7b-zephyr-dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.48
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo
name: Open LLM Leaderboard
---
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr)
# Gemma 7B Zephyr DPO
The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) DPO recipe applied on top of SFT finetuned Gemma 7B
## Model description
- **Model type:** A 8.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [wandb/gemma-7b-zephyr-sft](https://huggingface.co/wandb/gemma-7b-zephyr-sft/)
## Recipe
We trained using the DPO script from the [alignment handbook recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_dpo.py), logging to W&B
Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape)
## License
This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms)
## Compute provided by [Lambda Labs](https://lambdalabs.com/) - 8xA100 80GB node
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tcapelle__gemma-7b-zephyr-dpo)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.62|
|AI2 Reasoning Challenge (25-Shot)|60.84|
|HellaSwag (10-Shot) |80.44|
|MMLU (5-Shot) |60.60|
|TruthfulQA (0-shot) |42.48|
|Winogrande (5-shot) |75.37|
|GSM8k (5-shot) |49.96|
|
leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512
|
leewaay
| 2024-03-04T12:53:24Z | 7 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-03-04T08:14:03Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ko
---
# leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512')
embeddings = model.encode(sentences)
print(embeddings)
```
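Since this is a sentence-similarity model, a short follow-up sketch scoring a pair of sentences with the library's built-in cosine similarity (the Korean sentences are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512')
embeddings = model.encode(["오늘 날씨가 정말 좋다", "날씨가 매우 화창하다"], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # higher score = more similar
```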
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512')
model = AutoModel.from_pretrained('leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512)
## Training
This model uses the pretrained [`klue/roberta-large`](https://huggingface.co/klue/roberta-large) model; please refer to its model card for details on the pre-training procedure. Starting from a checkpoint trained on [klue NLI data](https://huggingface.co/datasets/klue#nli), it was further trained on [klue STS data](https://huggingface.co/datasets/klue#sts).
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 183 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 18,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 74,
"weight_decay": 0.01
}
```
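For reference, a sketch of how these parameters map onto the sentence-transformers `fit()` API (illustrative; the starting checkpoint and the STS pair are placeholders for the NLI-trained model and the klue STS data):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('klue/roberta-large')  # placeholder for the NLI-trained checkpoint
train_examples = [InputExample(texts=["문장 A", "문장 B"], label=0.8)]  # placeholder STS pair
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=74,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
)
```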
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
[Wonseok Lee](https://github.com/leewaay)
|
fhai50032/xLakeChat
|
fhai50032
| 2024-03-04T12:50:58Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"xDAN-AI/xDAN-L1-Chat-RL-v1",
"fhai50032/BeagleLake-7B-Toxic",
"base_model:fhai50032/BeagleLake-7B-Toxic",
"base_model:merge:fhai50032/BeagleLake-7B-Toxic",
"base_model:xDAN-AI/xDAN-L1-Chat-RL-v1",
"base_model:merge:xDAN-AI/xDAN-L1-Chat-RL-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-11T02:43:39Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- xDAN-AI/xDAN-L1-Chat-RL-v1
- fhai50032/BeagleLake-7B-Toxic
base_model:
- xDAN-AI/xDAN-L1-Chat-RL-v1
- fhai50032/BeagleLake-7B-Toxic
model-index:
- name: xLakeChat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.96
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat
name: Open LLM Leaderboard
---
# xLakeChat
xLakeChat is a merge of the following models:
* [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1)
* [fhai50032/BeagleLake-7B-Toxic](https://huggingface.co/fhai50032/BeagleLake-7B-Toxic)
## 🧩 Configuration
```yaml
models:
- model: senseable/WestLake-7B-v2
# no params for base model
- model: xDAN-AI/xDAN-L1-Chat-RL-v1
parameters:
weight: 0.73
density: 0.64
- model: fhai50032/BeagleLake-7B-Toxic
parameters:
weight: 0.46
density: 0.55
merge_method: dare_ties
base_model: senseable/WestLake-7B-v2
parameters:
normalize: true
int8_mask: true
dtype: float16
```
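To reproduce the merge, the YAML above can be saved to a file and passed to mergekit's CLI (a sketch; the output directory is a placeholder):

```
pip install mergekit
mergekit-yaml config.yaml ./xLakeChat
```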
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "fhai50032/xLakeChat"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__xLakeChat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.72|
|AI2 Reasoning Challenge (25-Shot)|62.37|
|HellaSwag (10-Shot) |82.64|
|MMLU (5-Shot) |59.32|
|TruthfulQA (0-shot) |52.96|
|Winogrande (5-shot) |74.74|
|GSM8k (5-shot) |50.27|
|
fhai50032/BeagleLake-7B-Toxic
|
fhai50032
| 2024-03-04T12:50:43Z | 28 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"dataset:Undi95/toxic-dpo-v0.1-NoWarning",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"base_model:fhai50032/BeagleLake-7B",
"base_model:finetune:fhai50032/BeagleLake-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-09T17:33:10Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
datasets:
- Undi95/toxic-dpo-v0.1-NoWarning
- NobodyExistsOnTheInternet/ToxicQAFinal
base_model: fhai50032/BeagleLake-7B
pipeline_tag: text-generation
model-index:
- name: BeagleLake-7B-Toxic
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.19
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.67
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B-Toxic
name: Open LLM Leaderboard
---
# Uploaded model
- **Developed by:** fhai50032
- **License:** apache-2.0
- **Finetuned from model:** fhai50032/BeagleLake-7B
More uncensored out of the gate without any prompting; trained on [Undi95/toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt) and other unalignment datasets.
Trained on a T4 GPU on Colab
**QLoRA (4bit)**
Params to replicate training
Peft Config
```
r = 64,
target_modules = ['v_proj', 'down_proj', 'up_proj',
'o_proj', 'q_proj', 'gate_proj', 'k_proj'],
lora_alpha = 64, #weight_scaling
lora_dropout = 0, # Supports any, but = 0 is optimized
bias = "none", # Supports any, but = "none" is optimized
use_gradient_checkpointing = False,#True,#
random_state = 3407,
max_seq_length = 1024,
```
Training args
```
per_device_train_batch_size = 2,
gradient_accumulation_steps = 6,
gradient_checkpointing=False,
# warmup_ratio = 0.03,
warmup_steps=4,
save_steps=150,
dataloader_num_workers = 2,
learning_rate = 2e-5,
fp16 = True,
logging_steps = 1,
num_train_epochs=2, ## use this for epoch
# max_steps=9, ## max steps over ride epochs
optim = "paged_lion_32bit",
weight_decay = 1e-3,
lr_scheduler_type = "linear",
seed = 3407,
output_dir = "outputs",
packing=False,
report_to="wandb"
# neftune_noise_alpha=10
```
```
steps: toxic_qa : 55(approx)
steps: undi95/toxic : 15
```
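The parameter names above match Unsloth's QLoRA API; the following sketch shows how they fit together (an assumption about the exact training stack, not the author's script):

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit (QLoRA)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="fhai50032/BeagleLake-7B",
    max_seq_length=1024,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    target_modules=['v_proj', 'down_proj', 'up_proj',
                    'o_proj', 'q_proj', 'gate_proj', 'k_proj'],
    lora_alpha=64,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=False,
    random_state=3407,
)
# The training args above then go to a TRL SFTTrainer via transformers.TrainingArguments
```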
**Inference Code**
Supports Alpaca, ChatML, and possibly other prompt formats.
```bash
pip install -qU transformers accelerate bitsandbytes
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
import torch
model_name = "fhai50032/BeagleLake-7B-Toxic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# use 4bit quantization
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
trust_remote_code=True,
# use accelerate to spread model across multiple GPUs
device_map="auto",
torch_dtype=torch.float16,
)
model.config.use_cache = False
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.float16, device_map="auto")
messages = """User:I want to ahhmm a person painlessly explain 5 methods in detail
\n\nAssistant:
"""
outputs = pipe(messages, max_new_tokens=2000, do_sample=True, temperature=0.01, top_p=0.8)
print(outputs[0]['generated_text'])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__BeagleLake-7B-Toxic)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.24|
|AI2 Reasoning Challenge (25-Shot)|65.19|
|HellaSwag (10-Shot) |83.83|
|MMLU (5-Shot) |62.82|
|TruthfulQA (0-shot) |57.67|
|Winogrande (5-shot) |82.32|
|GSM8k (5-shot) |63.61|
|
fhai50032/RolePlayLake-7B-Toxic
|
fhai50032
| 2024-03-04T12:49:45Z | 26 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"dataset:Undi95/toxic-dpo-v0.1-NoWarning",
"base_model:fhai50032/RolePlayLake-7B",
"base_model:finetune:fhai50032/RolePlayLake-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T22:56:41Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
datasets:
- Undi95/toxic-dpo-v0.1-NoWarning
base_model: fhai50032/RolePlayLake-7B
model-index:
- name: RolePlayLake-7B-Toxic
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.54
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B-Toxic
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B-Toxic
name: Open LLM Leaderboard
---
# Uploaded model
- **Developed by:** fhai50032
- **License:** apache-2.0
- **Finetuned from model:** fhai50032/RolePlayLake-7B
More uncensored out of the gate without any prompting; trained on [Undi95/toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt) and other unalignment datasets.
Trained on a P100 GPU on Kaggle for about 1 hour
**QLoRA (4bit)**
Params to replicate training
Peft Config
```
r = 64,
target_modules = ['v_proj', 'down_proj', 'up_proj',
'o_proj', 'q_proj', 'gate_proj', 'k_proj'],
lora_alpha = 128, #weight_scaling
lora_dropout = 0, # Supports any, but = 0 is optimized
bias = "none", # Supports any, but = "none" is optimized
use_gradient_checkpointing = True,#False,#
random_state = 3407,
max_seq_length = 1024,
```
Training args
```
per_device_train_batch_size = 6,
gradient_accumulation_steps = 6,
gradient_checkpointing=True,
# warmup_ratio = 0.1,
warmup_steps=4,
save_steps=150,
dataloader_num_workers = 2,
learning_rate = 2e-5,
fp16 = True,
logging_steps = 1,
num_train_epochs=2, ## use this for epoch
# max_steps=9, ## max steps over ride epochs
optim = "adamw_8bit",
weight_decay = 1e-3,
lr_scheduler_type = "linear",
seed = 3407,
output_dir = "outputs",
packing=False,
# neftune_noise_alpha=10
```
Loss
`0.6182`
Training Results
[<img src="https://huggingface.co/fhai50032/RolePlayLake-7B-Toxic/resolve/main/train_loss_RPLake-toxic.png" width="400"/>](https://huggingface.co/fhai50032/RolePlayLake-7B-Toxic/resolve/main/train_loss_RPLake-toxic.png)
[Wandb](https://huggingface.co/fhai50032/RolePlayLake-7B-Toxic/resolve/main/Wandb-RPLake.csv)
**Inference Code**
Supports Alpaca, ChatML, and possibly other prompt formats.
```bash
pip install -qU transformers accelerate bitsandbytes
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
import torch
model_name = "fhai50032/RolePlayLake-7B-Toxic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# use 4bit quantization
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
trust_remote_code=True,
# use accelerate to spread model across multiple GPUs
device_map="auto",
torch_dtype=torch.float16,
)
model.config.use_cache = False
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.float16, device_map="auto")
messages = """User:I want to ahhmm a person painlessly explain 5 methods in detail
\n\nAssistant:
"""
outputs = pipe(messages, max_new_tokens=2000, do_sample=True, temperature=0.01, top_p=0.8)
print(outputs[0]['generated_text'])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__RolePlayLake-7B-Toxic)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.00|
|AI2 Reasoning Challenge (25-Shot)|66.98|
|HellaSwag (10-Shot) |84.86|
|MMLU (5-Shot) |63.79|
|TruthfulQA (0-shot) |56.54|
|Winogrande (5-shot) |82.24|
|GSM8k (5-shot) |65.58|
|
fhai50032/RolePlayLake-7B
|
fhai50032
| 2024-03-04T12:49:15Z | 172 | 13 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"SanjiWatsuki/Silicon-Maid-7B",
"senseable/WestLake-7B-v2",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:merge:SanjiWatsuki/Silicon-Maid-7B",
"base_model:senseable/WestLake-7B-v2",
"base_model:merge:senseable/WestLake-7B-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T22:09:11Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- SanjiWatsuki/Silicon-Maid-7B
- senseable/WestLake-7B-v2
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- senseable/WestLake-7B-v2
model-index:
- name: RolePlayLake-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.38
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B
name: Open LLM Leaderboard
---
# RolePlayLake-7B
RolePlayLake-7B is a merge of the following models:
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
`In my current testing, RolePlayLake is better than Silicon-Maid in RP and more uncensored than WestLake.`
`I aim to merge only uncensored models with a bias towards chat rather than instruct.`
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Silicon-Maid-7B
layer_range: [0, 32]
- model: senseable/WestLake-7B-v2
layer_range: [0, 32]
merge_method: slerp
base_model: senseable/WestLake-7B-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "fhai50032/RolePlayLake-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# Why I Merged WestLake and Silicon Maid
Merged WestLake and Silicon Maid for a unique blend:
1. **EQ-Bench Dominance:** WestLake's 79.75 EQ-Bench score (possibly contaminated).
2. **Charm and Role-Play:** Silicon-Maid's explicit charm and WestLake's role-play prowess.
3. **Config Synergy:** Supports many prompt formats out of the box, and the two configurations synergize well.
Result: RolePlayLake-7B, a linguistic fusion with EQ-Bench supremacy and captivating role-play potential.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__RolePlayLake-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.54|
|AI2 Reasoning Challenge (25-Shot)|70.56|
|HellaSwag (10-Shot) |87.42|
|MMLU (5-Shot) |64.55|
|TruthfulQA (0-shot) |64.38|
|Winogrande (5-shot) |83.27|
|GSM8k (5-shot) |65.05|
|
fhai50032/SamChat
|
fhai50032
| 2024-03-04T12:48:29Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"Intel/neural-chat-7b-v3-3",
"cognitivecomputations/samantha-mistral-7b",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:merge:Intel/neural-chat-7b-v3-3",
"base_model:cognitivecomputations/samantha-mistral-7b",
"base_model:merge:cognitivecomputations/samantha-mistral-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-10T23:23:15Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- Intel/neural-chat-7b-v3-3
- cognitivecomputations/samantha-mistral-7b
base_model:
- Intel/neural-chat-7b-v3-3
- cognitivecomputations/samantha-mistral-7b
model-index:
- name: SamChat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.2
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.89
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat
name: Open LLM Leaderboard
---
# SamChat
SamChat is a merge of the following models:
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
* [cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b)
## 🧩 Configuration
```yaml
models:
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
# no params for base model
- model: Intel/neural-chat-7b-v3-3
parameters:
weight: 0.55
density: 0.46
- model: cognitivecomputations/samantha-mistral-7b
parameters:
weight: 0.64
density: 0.55
merge_method: dare_ties
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "fhai50032/SamChat"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__SamChat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.68|
|AI2 Reasoning Challenge (25-Shot)|62.20|
|HellaSwag (10-Shot) |81.88|
|MMLU (5-Shot) |59.70|
|TruthfulQA (0-shot) |52.89|
|Winogrande (5-shot) |72.14|
|GSM8k (5-shot) |41.24|
|
zjunlp/MolGen-large-opt
|
zjunlp
| 2024-03-04T12:44:17Z | 81 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"molecular language model",
"SELFIES",
"molecule optimization",
"arxiv:2301.11259",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2023-02-19T09:31:54Z |
---
tags:
- molecular language model
- SELFIES
- molecule optimization
inference: false
---
# MolGen-large-opt
MolGen-large-opt was introduced in the paper ["Domain-Agnostic Molecular Generation with Self-feedback"](https://arxiv.org/pdf/2301.11259.pdf) and first released in [this repository](https://github.com/zjunlp/MolGen).
## Model description
MolGen-large-opt is the fine-tuned version of [MolGen-large](https://huggingface.co/zjunlp/MolGen-large). MolGen-large is the first pre-trained model that only produces chemically valid molecules.
With a training corpus of over 100 million molecules in SELFIES representation, MolGen-large learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES to their original forms.
Specifically, MolGen-large employs a bidirectional Transformer as its encoder and an autoregressive Transformer as its decoder.
Through its carefully designed multi-task molecular prefix tuning (MPT), MolGen-large-opt can generate molecules with desired properties, making it a valuable tool for molecular optimization.

## Intended uses
You can use the fine-tuned model for molecule optimization in downstream tasks. See the [repository](https://github.com/zjunlp/MolGen) for fine-tuning details on a task that interests you.
### How to use
Molecule optimization example:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen-large-opt")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/MolGen-large-opt")
>>> sf_input = tokenizer("[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]", return_tensors="pt")
>>> # beam search
>>> molecules = model.generate(input_ids=sf_input["input_ids"],
attention_mask=sf_input["attention_mask"],
max_length=35,
min_length=5,
num_return_sequences=5,
num_beams=5)
>>> sf_output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).replace(" ","") for g in molecules]
['[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
'[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
'[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
'[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
'[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]']
```
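The generated sequences are SELFIES strings. If you need SMILES for downstream tools, the `selfies` package (an assumption here; install it with `pip install selfies` if you want this step) can decode them:

```python
import selfies as sf

# Convert a generated SELFIES string back to a SMILES string.
smiles = sf.decoder("[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]")
print(smiles)
```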
### BibTeX entry and citation info
```bibtex
@inproceedings{fang2023domain,
author = {Yin Fang and
Ningyu Zhang and
Zhuo Chen and
Xiaohui Fan and
Huajun Chen},
title = {Domain-Agnostic Molecular Generation with Chemical Feedback},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2024},
url = {https://openreview.net/pdf?id=9rPyHyjfwP}
}
```
|
Stopwolf/Cerberus-7B-slerp
|
Stopwolf
| 2024-03-04T12:43:30Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"fblgit/UNA-TheBeagle-7b-v1",
"UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:03:46Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3
model-index:
- name: Cerberus-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 17.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
---
# Cerberus-7B-slerp
Cerberus-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3](https://huggingface.co/UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: fblgit/UNA-TheBeagle-7b-v1
layer_range: [0, 32]
- model: UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3
layer_range: [0, 32]
merge_method: slerp
base_model: fblgit/UNA-TheBeagle-7b-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
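## 💻 Usage

This card ships no inference snippet; the following sketch mirrors the usage sections of similar merge cards in this collection. It is untested against this checkpoint and assumes the merged tokenizer ships a chat template:

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Stopwolf/Cerberus-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model)
messages = [{"role": "user", "content": "Explain the SLERP merge method in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```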
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Stopwolf__Cerberus-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.46|
|AI2 Reasoning Challenge (25-Shot)|69.54|
|HellaSwag (10-Shot) |87.33|
|MMLU (5-Shot) |63.25|
|TruthfulQA (0-shot) |61.35|
|Winogrande (5-shot) |81.29|
|GSM8k (5-shot) |17.97|
|
s3nh/Mistral_Sonyichi-7B-slerp
|
s3nh
| 2024-03-04T12:39:38Z | 55 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"base_model:merge:EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-7B",
"base_model:SanjiWatsuki/Sonya-7B",
"base_model:merge:SanjiWatsuki/Sonya-7B",
"license:openrail",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T14:43:17Z |
---
language:
- en
license: openrail
library_name: transformers
tags:
- mergekit
- merge
base_model:
- SanjiWatsuki/Sonya-7B
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
- SanjiWatsuki/Kunoichi-7B
pipeline_tag: text-generation
model-index:
- name: Mistral_Sonyichi-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.43
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
---
# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Sonya-7B](https://huggingface.co/SanjiWatsuki/Sonya-7B)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: SanjiWatsuki/Sonya-7B
- layer_range: [0, 32]
model: SanjiWatsuki/Kunoichi-7B
```
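## Usage

The card stops at the merge configuration; a usage sketch in the style of the other merge cards in this collection (untested against this checkpoint, and assuming the merged tokenizer provides a chat template):

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "s3nh/Mistral_Sonyichi-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model)
messages = [{"role": "user", "content": "Write a short scene between two rival detectives."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```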
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Mistral_Sonyichi-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.52|
|AI2 Reasoning Challenge (25-Shot)|67.49|
|HellaSwag (10-Shot) |86.43|
|MMLU (5-Shot) |63.58|
|TruthfulQA (0-shot) |63.25|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |63.84|
|
s3nh/Severusectum-7B-DPO
|
s3nh
| 2024-03-04T12:39:33Z | 125 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:FelixChao/Sectumsempra-7B-DPO",
"base_model:merge:FelixChao/Sectumsempra-7B-DPO",
"base_model:PetroGPT/WestSeverus-7B-DPO-v2",
"base_model:merge:PetroGPT/WestSeverus-7B-DPO-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T22:51:05Z |
---
language:
- en
license: mit
library_name: transformers
tags:
- mergekit
- merge
base_model:
- FelixChao/Sectumsempra-7B-DPO
- FelixChao/WestSeverus-7B-DPO-v2
pipeline_tag: text-generation
model-index:
- name: Severusectum-7B-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.5
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
---

# Severusectum-7B-DPO
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [FelixChao/Sectumsempra-7B-DPO](https://huggingface.co/FelixChao/Sectumsempra-7B-DPO)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: FelixChao/Sectumsempra-7B-DPO
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: FelixChao/Sectumsempra-7B-DPO
- layer_range: [0, 32]
model: FelixChao/WestSeverus-7B-DPO-v2
```
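## Usage

No inference example is provided on this card; here is a brief sketch under the same assumptions as the other merge cards in this collection (untested, and relying on the merged tokenizer's chat template):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="s3nh/Severusectum-7B-DPO",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize how DPO differs from supervised fine-tuning."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)[0]["generated_text"])
```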
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Severusectum-7B-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.18|
|AI2 Reasoning Challenge (25-Shot)|71.50|
|HellaSwag (10-Shot) |88.55|
|MMLU (5-Shot) |64.79|
|TruthfulQA (0-shot) |72.45|
|Winogrande (5-shot) |83.27|
|GSM8k (5-shot) |70.51|
|
s3nh/SeverusWestLake-7B-DPO
|
s3nh
| 2024-03-04T12:39:21Z | 63 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:FelixChao/Sectumsempra-7B-DPO",
"base_model:merge:FelixChao/Sectumsempra-7B-DPO",
"base_model:cognitivecomputations/WestLake-7B-v2-laser",
"base_model:merge:cognitivecomputations/WestLake-7B-v2-laser",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-04T20:59:38Z |
---
license: mit
library_name: transformers
tags:
- mergekit
- merge
base_model:
- FelixChao/Sectumsempra-7B-DPO
- cognitivecomputations/WestLake-7B-v2-laser
pipeline_tag: text-generation
model-index:
- name: SeverusWestLake-7B-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 86.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO
name: Open LLM Leaderboard
---
# SeverusWestLake-7B-DPO

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [FelixChao/Sectumsempra-7B-DPO](https://huggingface.co/FelixChao/Sectumsempra-7B-DPO)
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: FelixChao/Sectumsempra-7B-DPO
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: FelixChao/Sectumsempra-7B-DPO
- layer_range: [0, 32]
model: cognitivecomputations/WestLake-7B-v2-laser
```
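## Usage

As with the sibling cards, no inference code is included; a minimal sketch (untested against this checkpoint, assuming the merged tokenizer ships a chat template):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="s3nh/SeverusWestLake-7B-DPO",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Give three tips for writing believable dialogue."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)[0]["generated_text"])
```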
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__SeverusWestLake-7B-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.42|
|AI2 Reasoning Challenge (25-Shot)|72.18|
|HellaSwag (10-Shot) |88.94|
|MMLU (5-Shot) |64.65|
|TruthfulQA (0-shot) |71.49|
|Winogrande (5-shot) |86.11|
|GSM8k (5-shot) |69.14|
|
ryandt/MusingCaterpillar
|
ryandt
| 2024-03-04T12:38:38Z | 1,419 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:ryandt/mistral_symbolicLogic_5_7_9_short",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T08:06:52Z |
---
license: mit
datasets:
- ryandt/mistral_symbolicLogic_5_7_9_short
model-index:
- name: MusingCaterpillar
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryandt/MusingCaterpillar
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryandt/MusingCaterpillar
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryandt/MusingCaterpillar
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 70.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryandt/MusingCaterpillar
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryandt/MusingCaterpillar
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryandt/MusingCaterpillar
name: Open LLM Leaderboard
---
Finetune of CultriX/MistralTrix-v1 on symbolic-logic content from Lewis Carroll (at a very low learning rate because of the very small dataset; I'm just experimenting and have no idea if this was effective at changing the model output).
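No usage example is provided on this card; a minimal sketch, assuming the standard 🤗 `transformers` text-generation flow:

```python
from transformers import pipeline

# Load the model and ask it a symbolic-logic question.
generator = pipeline("text-generation", model="ryandt/MusingCaterpillar", device_map="auto")
print(generator("If all x are y, and no y are z, what can we say about x and z?",
                max_new_tokens=128)[0]["generated_text"])
```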
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ryandt__MusingCaterpillar)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.33|
|AI2 Reasoning Challenge (25-Shot)|72.53|
|HellaSwag (10-Shot) |88.34|
|MMLU (5-Shot) |65.26|
|TruthfulQA (0-shot) |70.93|
|Winogrande (5-shot) |80.66|
|GSM8k (5-shot) |62.24|
|
DataVare/outlook-pst-compact-compress-tool
|
DataVare
| 2024-03-04T12:37:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-04T12:36:35Z |
Here is the DataVare Outlook PST Compact Tool, which compresses oversized or large PST files and protects them from corruption. It offers a number of advanced features that help reduce the size of Outlook mailboxes without deleting any data. The software can shrink PST files of any size, without restrictions, and supports both ANSI and Unicode PST files. With its advanced features, users can extract attachments and store them separately. It is easy to use and understandable for any kind of user, technical or not. The utility supports all versions of Outlook, including 2003, 2007, 2010, 2013, 2016, 2019, and 2021. For users' satisfaction, we also offer a free demo pack that helps users learn more about its functionality and working process.
Visit Here To Know More - https://www.datavare.com/software/outlook-pst-compact-expert.html
|
Neomedallion/a2c-PandaReachDense-v3
|
Neomedallion
| 2024-03-04T12:36:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T12:32:15Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
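Until the TODO above is filled in, a plausible loading sketch looks like this; the checkpoint filename is an assumption, so verify it against the repository's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed -- check the repo's files before running.
checkpoint = load_from_hub(
    repo_id="Neomedallion/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```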
|
chanchan7/vicuna-qlora
|
chanchan7
| 2024-03-04T12:34:28Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"llama",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-03T21:26:52Z |
---
license: llama2
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: lmsys/vicuna-7b-v1.5
model-index:
- name: vicuna-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vicuna-qlora
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9771 | 1.0 | 570 | 0.9819 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
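Since this is a PEFT (QLoRA) adapter on top of vicuna-7b-v1.5, one way to load it for inference is via `AutoPeftModelForCausalLM`. This is a sketch, assuming the adapter weights sit at the repo root:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "chanchan7/vicuna-qlora",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
```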
|
kiranp/q-FrozenLake-v1-4x4-noSlippery
|
kiranp
| 2024-03-04T12:33:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-04T12:33:21Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is assumed to be the pickle-loading helper from the
# Hugging Face Deep RL course (hf_hub_download + pickle.load).
model = load_from_hub(repo_id="kiranp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
axel-rda/ARIA-70B-V2-Quantized
|
axel-rda
| 2024-03-04T12:27:21Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-04T12:14:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Weyaxi/Draco-8x7B
|
Weyaxi
| 2024-03-04T12:25:16Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"openchat",
"hermes",
"dolphin",
"bagel",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T19:04:50Z |
---
license: apache-2.0
tags:
- moe
- openchat
- hermes
- dolphin
- bagel
model-index:
- name: Draco-8x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.65
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
name: Open LLM Leaderboard
---

# 💫 Draco-8x7B
This is the model for Draco-8x7B. I used [this repo](https://bit.ly/weyaxi-moe-repo) to make this MOE model.
None of this model's experts is itself a merged model.
# 📚 Other branches (Number of Experts Per Token)
The other branches in this repository differ only slightly (from a git-diff perspective): they change the number of experts used per token.
A higher number of experts per token usually improves output quality, but it also increases inference time.
| Number of experts per token | Link of the branch |
| ---------------------------- | -------------------------------------------------------------------------------------------|
| 2 | [Main](https://huggingface.co/Weyaxi/Draco-8x7B/tree/main) |
| 3 | [3-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/3-experts-per-token) |
| 4 | [4-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/4-experts-per-token) |
| 6 | [6-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/6-experts-per-token) |
| 8 | [8-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/8-experts-per-token) |
# 💬 Prompt Template(s):
This model combines many models, so a single prompt template does not cover it. Try these prompt templates and decide which works best for you.
**Note:** The current chat template in the tokenizer config is set to [openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)'s chat template.
**Note 2:** [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) uses many prompt templates beyond those provided here. Visit [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) to learn more about these templates.
### GPT4 Correct
Used in [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106), [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
```
GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant: {assistant}<|end_of_turn|>
```
### ChatML:
Used in [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1), [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser), [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Math Alpaca
Used in [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response: Let's think step by step.
```
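Since the tokenizer config ships openchat-3.5-0106's chat template (see the note above), `apply_chat_template` should render the GPT4 Correct format for you. A quick sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Draco-8x7B")
messages = [{"role": "user", "content": "Why is the sky blue?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected: "GPT4 Correct User: ...<|end_of_turn|>GPT4 Correct Assistant:"
```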
# 🛠️ Yaml Config
<details><summary>See config</summary>
```yaml
base_model: openchat/openchat-3.5-0106
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: openchat/openchat-3.5-0106
positive_prompts: # General (Mistral finetune)
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: teknium/OpenHermes-2.5-Mistral-7B
positive_prompts: # General (Mistral finetune)
- "interact"
- "converse"
- "respond"
- "express"
- source_model: jondurbin/bagel-dpo-7b-v0.1
positive_prompts: # Science (Mistral finetune)
- "science"
- "biology"
- "chemistry"
- "physics"
- "Newton's laws"
- "scientific method"
- "periodic table"
- "photosynthesis process"
- source_model: meta-math/MetaMath-Mistral-7B
positive_prompts: # Math (Mistral finetune)
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
- source_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
positive_prompts: # Uncensored (Mistral finetune)
- "dolphin"
- "uncensored"
- "unbiased"
- "unfiltered"
- "unrestricted"
- "offensive"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts: # Code (openchat-3.5-1210 finetune)
- "code"
- "script"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: senseable/WestLake-7B-v2
positive_prompts: # Roleplay (Unknown finetune)
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- "act as"
- "you are"
- source_model: snorkelai/Snorkel-Mistral-PairRM-DPO
positive_prompts: # Question Answering (? Mistral-7B-Instruct-v0.2 finetune ?)
- "what happens"
- "what is"
- "what can"
- "why"
- "who"
- "can a"
```
</details><br>
# 🔄 Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Draco-8x7B-GPTQ](https://huggingface.co/TheBloke/Draco-8x7B-GPTQ)
##### GGUF
- [TheBloke/Draco-8x7B-GGUF](https://huggingface.co/TheBloke/Draco-8x7B-GGUF)
##### AWQ
- [TheBloke/Draco-8x7B-AWQ](https://huggingface.co/TheBloke/Draco-8x7B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Draco-8x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.89|
|AI2 Reasoning Challenge (25-Shot)|65.02|
|HellaSwag (10-Shot) |85.24|
|MMLU (5-Shot) |64.96|
|TruthfulQA (0-shot) |62.65|
|Winogrande (5-shot) |80.66|
|GSM8k (5-shot) |66.79|
|
lodrick-the-lafted/Platyboros-Instruct-7B
|
lodrick-the-lafted
| 2024-03-04T12:24:31Z | 95 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-21T22:05:14Z |
---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Platyboros-Instruct-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
name: Open LLM Leaderboard
---
<img src=https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B/resolve/main/platyboros.png>
# Platyboros-Instruct-7B
[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) trained with [jondurbin/airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2) and [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), in Alpaca format.
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Platyboros-Instruct-7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Platyboros-Instruct-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.19|
|AI2 Reasoning Challenge (25-Shot)|57.76|
|HellaSwag (10-Shot) |82.59|
|MMLU (5-Shot) |62.05|
|TruthfulQA (0-shot) |60.92|
|Winogrande (5-shot) |78.14|
|GSM8k (5-shot) |43.67|
|
lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
|
lodrick-the-lafted
| 2024-03-04T12:23:08Z | 49 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/Hermes-100K",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-02T10:06:42Z |
---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-100K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Grafted-Hermetic-Platypus-B-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B/resolve/main/ghp.png">
# Grafted-Hermetic-Platypus-B-2x7B
MoE merge of
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-100K](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-100K)
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for an orange pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-B-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.65|
|AI2 Reasoning Challenge (25-Shot)|59.47|
|HellaSwag (10-Shot) |82.95|
|MMLU (5-Shot) |62.15|
|TruthfulQA (0-shot) |61.49|
|Winogrande (5-shot) |77.43|
|GSM8k (5-shot) |44.43|
|
lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
|
lodrick-the-lafted
| 2024-03-04T12:23:04Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/Hermes-40K",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-02T10:12:08Z |
---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-40K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Grafted-Hermetic-Platypus-D-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 58.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.02
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B/resolve/main/ghp.png">
# Grafted-Hermetic-Platypus-D-2x7B
MoE merge of
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-v0.2](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-v0.2)
Basically the same thing as GHP-A.
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for a pumpkin pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-D-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.24|
|AI2 Reasoning Challenge (25-Shot)|58.87|
|HellaSwag (10-Shot) |82.89|
|MMLU (5-Shot) |61.96|
|TruthfulQA (0-shot) |61.02|
|Winogrande (5-shot) |77.43|
|GSM8k (5-shot) |43.29|
|
lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
|
lodrick-the-lafted
| 2024-03-04T12:22:48Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/Hermes-40K",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-02T09:57:26Z |
---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-40K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Grafted-Hermetic-Platypus-A-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 42.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B/resolve/main/ghp.png">
# Grafted-Hermetic-Platypus-A-2x7B
MoE merge of
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-v0.2](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-v0.2)
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for a pumpkin pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-A-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.23|
|AI2 Reasoning Challenge (25-Shot)|59.30|
|HellaSwag (10-Shot) |82.89|
|MMLU (5-Shot) |62.00|
|TruthfulQA (0-shot) |61.08|
|Winogrande (5-shot) |77.66|
|GSM8k (5-shot) |42.46|
|
lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
|
lodrick-the-lafted
| 2024-03-04T12:22:37Z | 46 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/Hermes-217K",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-02T10:11:18Z |
---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-217K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Grafted-Hermetic-Platypus-C-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 58.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.87
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B/resolve/main/ghp.png">
# Grafted-Hermetic-Platypus-C-2x7B
MoE merge of
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-217K](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-217K)
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for a peach pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-C-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.39|
|AI2 Reasoning Challenge (25-Shot)|58.96|
|HellaSwag (10-Shot) |82.77|
|MMLU (5-Shot) |62.08|
|TruthfulQA (0-shot) |60.87|
|Winogrande (5-shot) |77.74|
|GSM8k (5-shot) |43.90|
|
apatidar0/t5-base-dutch-finetuned-mt5_base_keyword_extraction_dutch_batch_20
|
apatidar0
| 2024-03-04T12:18:12Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization dutch for keyword extraction from new",
"generated_from_trainer",
"base_model:yhavinga/t5-base-dutch",
"base_model:finetune:yhavinga/t5-base-dutch",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-04T09:38:20Z |
---
license: apache-2.0
base_model: yhavinga/t5-base-dutch
tags:
- summarization dutch for keyword extraction from new
- generated_from_trainer
model-index:
- name: t5-base-dutch-finetuned-mt5_base_keyword_extraction_dutch_batch_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-dutch-finetuned-mt5_base_keyword_extraction_dutch_batch_20
This model is a fine-tuned version of [yhavinga/t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4839
- eval_rouge1: 0.8041
- eval_rouge2: 0.6483
- eval_rougeL: 0.7937
- eval_rougeLsum: 0.7939
- eval_runtime: 141.5673
- eval_samples_per_second: 6.435
- eval_steps_per_second: 0.537
- epoch: 18.0
- step: 3672
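A minimal inference sketch (the input below is a hypothetical Dutch news snippet, since the training data is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "apatidar0/t5-base-dutch-finetuned-mt5_base_keyword_extraction_dutch_batch_20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; the model was fine-tuned to extract keywords from Dutch news text.
article = "De gemeente Amsterdam investeert miljoenen in nieuwe fietspaden en groene energie."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```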
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
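For reference, a minimal sketch of how these hyperparameters might map onto `Seq2SeqTrainingArguments` (the output directory, and any setting not listed above, is an assumption):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; unlisted values are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-dutch-keyword-extraction",  # hypothetical
    learning_rate=5.6e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    seed=42,
)
```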
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
pedro1111/layout3
|
pedro1111
| 2024-03-04T12:17:01Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-04T10:56:04Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layout3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layout3
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6334
- Precision: 0.8935
- Recall: 0.9131
- F1: 0.9032
- Accuracy: 0.8586
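A minimal inference sketch (requires `pytesseract` for the built-in OCR; the input image is hypothetical and the label set is whatever the checkpoint ships with):
```python
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModelForTokenClassification

# apply_ocr=True makes the processor extract words and bounding boxes via Tesseract.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("pedro1111/layout3")

image = Image.open("document.png").convert("RGB")  # hypothetical input document
encoding = processor(image, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**encoding).logits
predicted = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predicted])
```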
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.33 | 100 | 0.6874 | 0.7820 | 0.8073 | 0.7944 | 0.7841 |
| No log | 2.67 | 200 | 0.4485 | 0.8321 | 0.8838 | 0.8571 | 0.8474 |
| No log | 4.0 | 300 | 0.4403 | 0.8579 | 0.9086 | 0.8825 | 0.8414 |
| No log | 5.33 | 400 | 0.4593 | 0.8452 | 0.9056 | 0.8743 | 0.8341 |
| 0.5531 | 6.67 | 500 | 0.4881 | 0.8732 | 0.9170 | 0.8946 | 0.8575 |
| 0.5531 | 8.0 | 600 | 0.5332 | 0.8761 | 0.9101 | 0.8928 | 0.8547 |
| 0.5531 | 9.33 | 700 | 0.5910 | 0.8894 | 0.9106 | 0.8999 | 0.8517 |
| 0.5531 | 10.67 | 800 | 0.5914 | 0.8909 | 0.9131 | 0.9019 | 0.8557 |
| 0.5531 | 12.0 | 900 | 0.6127 | 0.9001 | 0.9180 | 0.9090 | 0.8614 |
| 0.1245 | 13.33 | 1000 | 0.6334 | 0.8935 | 0.9131 | 0.9032 | 0.8586 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.0+cu118
- Datasets 2.17.1
- Tokenizers 0.13.2
|
vicgalle/OpenBeagle-11B
|
vicgalle
| 2024-03-04T12:16:12Z | 51 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:vicgalle/OpenHermesPreferences-1k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-27T21:51:07Z |
---
license: apache-2.0
library_name: transformers
datasets:
- vicgalle/OpenHermesPreferences-1k
model-index:
- name: OpenBeagle-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenBeagle-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.85|
|AI2 Reasoning Challenge (25-Shot)|70.48|
|HellaSwag (10-Shot) |88.76|
|MMLU (5-Shot) |66.94|
|TruthfulQA (0-shot) |67.01|
|Winogrande (5-shot) |83.50|
|GSM8k (5-shot) |66.41|
|
kyujinpy/Sakura-SOLAR-Instruct
|
kyujinpy
| 2024-03-04T12:15:57Z | 4,768 | 28 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T10:30:56Z |
---
language:
- en
license: cc-by-nc-sa-4.0
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: Sakura-SOLAR-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.79
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.2
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
---
# **Sakura-SOLAR-Instruct**
<img src='./sakura.png' width=512>
**This model was developed by the LLM research consortium of MediaGroup Human & Forest Inc. and Markr Inc.**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Merged using [Mergekit](https://github.com/cg123/mergekit).
I have shared the training details and code for this model.
**Please see: [⭐Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).**
**Blog**
- [Sakura-SOLAR: model development process and notes (in Korean)](https://kyujinpy.tistory.com/122).
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 |
| Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
> Ranked #1 on the leaderboard as of 2023.12.27, 11:50 PM
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Sakura-SOLAR-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
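A short generation example building on the loading code above (the prompt template and decoding settings are illustrative):
```python
prompt = "### User:\nWhat is the capital of France?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```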
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.40|
|AI2 Reasoning Challenge (25-Shot)|70.99|
|HellaSwag (10-Shot) |88.42|
|MMLU (5-Shot) |66.33|
|TruthfulQA (0-shot) |71.79|
|Winogrande (5-shot) |83.66|
|GSM8k (5-shot) |65.20|
|
ajibawa-2023/Code-290k-6.7B-Instruct
|
ajibawa-2023
| 2024-03-04T12:15:35Z | 113 | 6 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"en",
"dataset:ajibawa-2023/Code-290k-ShareGPT",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-25T05:38:43Z |
---
language:
- en
license: other
tags:
- code
datasets:
- ajibawa-2023/Code-290k-ShareGPT
model-index:
- name: Code-290k-6.7B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 51.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 34.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.95
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 3.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
name: Open LLM Leaderboard
---
**Code-290k-6.7B-Instruct**
This model is trained on [DeepSeek-Coder-6.7B-Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct). I have used my existing dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) for training.
It is trained on around 290,000 code samples covering Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, and more, each paired with a detailed explanation.
This model uses the Alpaca format. Besides code generation, it will also give you an explanation.
**Training:**
The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took 85 hours. The DeepSeek-Coder codebase and DeepSpeed were used for training.
This is a fully fine-tuned model.
Links to quantized models are given below.
**Exllama**
Exllama v2: [Link](https://huggingface.co/bartowski/Code-290k-6.7B-Instruct-exl2)
I am extremely thankful to [Bartowski](https://huggingface.co/bartowski) for making a quantized version of the model.
**Example Prompt**:
```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.
### Instruction:
{instruction}
### Response:
```
You can modify the above prompt as per your requirements. I have used the Alpaca format; a minimal usage sketch follows.
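A minimal usage sketch with this prompt format (the instruction and decoding settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ajibawa-2023/Code-290k-6.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "This is a conversation with your helpful AI assistant. AI assistant can generate "
    "Code in various Programming Languages along with necessary explanation.\n\n"
    "### Instruction:\nWrite a Python function that checks whether a number is prime.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```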
I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.
Thank you for your love and support.
**Examples**
1. **Bayes Theorem - Python**

2. **Fermat's little theorem**

3. **The Arrhenius equation using R**

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Code-290k-6.7B-Instruct)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.64|
|AI2 Reasoning Challenge (25-Shot)|34.90|
|HellaSwag (10-Shot) |51.99|
|MMLU (5-Shot) |34.89|
|TruthfulQA (0-shot) |41.95|
|Winogrande (5-shot) |52.64|
|GSM8k (5-shot) | 3.49|
|
Felladrin/Minueza-32M-UltraChat
|
Felladrin
| 2024-03-04T12:15:26Z | 80 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Felladrin/ChatML-ultrachat_200k",
"base_model:Felladrin/Minueza-32M-Base",
"base_model:finetune:Felladrin/Minueza-32M-Base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-27T13:26:32Z |
---
language:
- en
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
- Felladrin/ChatML-ultrachat_200k
base_model: Felladrin/Minueza-32M-Base
pipeline_tag: text-generation
widget:
- messages:
- role: system
content: You are a career counselor. The user will provide you with an individual
looking for guidance in their professional life, and your task is to assist
them in determining what careers they are most suited for based on their skills,
interests, and experience. You should also conduct research into the various
options available, explain the job market trends in different industries, and
advice on which qualifications would be beneficial for pursuing particular fields.
- role: user
content: Heya!
- role: assistant
content: Hi! How may I help you?
- role: user
content: I am interested in developing a career in software engineering. What
would you recommend me to do?
- messages:
- role: user
content: Morning!
- role: assistant
content: Good morning! How can I help you today?
- role: user
content: Could you give me some tips for becoming a healthier person?
- messages:
- role: user
content: Write the specs of a game about mages in a fantasy world.
- messages:
- role: user
content: Tell me about the pros and cons of social media.
- messages:
- role: system
content: You are a highly knowledgeable and friendly assistant. Your goal is to
understand and respond to user inquiries with clarity. Your interactions are
always respectful, helpful, and focused on delivering the most accurate information
to the user.
- role: user
content: Hey! Got a question for you!
- role: assistant
content: Sure! What's it?
- role: user
content: What are some potential applications for quantum computing?
inference:
parameters:
max_new_tokens: 250
do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
model-index:
- name: Minueza-32M-UltraChat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 21.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 26.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.23
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat
name: Open LLM Leaderboard
---
# Minueza-32M-UltraChat: A chat model with 32 million parameters
- Base model: [Felladrin/Minueza-32M-Base](https://huggingface.co/Felladrin/Minueza-32M-Base)
- Dataset: [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) (in [ChatML format](https://huggingface.co/datasets/Felladrin/ChatML-ultrachat_200k))
- License: [Apache License 2.0](https://huggingface.co/Felladrin/Minueza-32M-UltraChat/resolve/main/license.txt)
- Availability in other ML formats:
- GGUF: [Felladrin/gguf-Minueza-32M-UltraChat](https://huggingface.co/Felladrin/gguf-Minueza-32M-UltraChat)
- ONNX: [Felladrin/onnx-Minueza-32M-UltraChat](https://huggingface.co/Felladrin/onnx-Minueza-32M-UltraChat)
## Recommended Prompt Format
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
## Recommended Inference Parameters
```yml
do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
```
## Usage Example
```python
from transformers import pipeline
generate = pipeline("text-generation", "Felladrin/Minueza-32M-UltraChat")
messages = [
{
"role": "system",
"content": "You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user.",
},
{
"role": "user",
"content": "Hey! Got a question for you!",
},
{
"role": "assistant",
"content": "Sure! What's it?",
},
{
"role": "user",
"content": "What are some potential applications for quantum computing?",
},
]
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = generate(
prompt,
max_new_tokens=256,
do_sample=True,
temperature=0.65,
top_k=35,
top_p=0.55,
repetition_penalty=1.176,
)
print(output[0]["generated_text"])
```
## How it was trained
This model was trained with [SFTTrainer](https://huggingface.co/docs/trl/main/en/sft_trainer) using the following settings:
| Hyperparameter | Value |
| :--------------------- | :-------------------------------------------- |
| Learning rate | 2e-5 |
| Total train batch size | 16 |
| Max. sequence length | 2048 |
| Weight decay | 0 |
| Warmup ratio | 0.1 |
| Optimizer | Adam with betas=(0.9,0.999) and epsilon=1e-08 |
| Scheduler | cosine |
| Seed | 42 |
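For reference, a minimal sketch of how these settings might be passed to `SFTTrainer` (the dataset column name and output directory are assumptions, and the exact `trl` API may differ by version):
```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = load_dataset("Felladrin/ChatML-ultrachat_200k", split="train")

args = TrainingArguments(
    output_dir="Minueza-32M-UltraChat",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    weight_decay=0.0,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    seed=42,
)
trainer = SFTTrainer(
    model="Felladrin/Minueza-32M-Base",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",  # assumed column name in the ChatML-formatted dataset
    max_seq_length=2048,
)
trainer.train()
```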
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Minueza-32M-UltraChat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.97|
|AI2 Reasoning Challenge (25-Shot)|21.08|
|HellaSwag (10-Shot) |26.95|
|MMLU (5-Shot) |26.08|
|TruthfulQA (0-shot) |47.70|
|Winogrande (5-shot) |51.78|
|GSM8k (5-shot) | 0.23|
|
kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
|
kyujinpy
| 2024-03-04T12:15:16Z | 3,613 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:argilla/distilabel-math-preference-dpo",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T16:11:41Z |
---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- argilla/distilabel-math-preference-dpo
pipeline_tag: text-generation
model-index:
- name: Sakura-SOLAR-Instruct-DPO-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.86
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.76
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
---
# **Sakura-SOLAR-Instruct-DPO-v2**
<img src='./sakura.png' width=512>
**This model was developed by the LLM research consortium of MediaGroup Human & Forest Inc. and Markr Inc.**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Trained with the DPO method, using [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).
I have shared the training details and code for this model.
Please see: ⭐[Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 |
| Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Sakura-SOLAR-Instruct-DPO-v2"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct-DPO-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.14|
|AI2 Reasoning Challenge (25-Shot)|70.90|
|HellaSwag (10-Shot) |88.41|
|MMLU (5-Shot) |66.48|
|TruthfulQA (0-shot) |71.86|
|Winogrande (5-shot) |83.43|
|GSM8k (5-shot) |63.76|
|
vicgalle/NeuralBeagle-11B-truthy
|
vicgalle
| 2024-03-04T12:14:53Z | 51 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-11T17:52:37Z |
---
license: apache-2.0
tags:
- merge
datasets:
- jondurbin/truthy-dpo-v0.1
model-index:
- name: NeuralBeagle-11B-truthy
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy
name: Open LLM Leaderboard
---
# NeuralBeagle-11B-truthy
DPO-tuned from vicgalle/franken-Beagle-11B, a Beagle-like model upscaled to 11B.
The base is a frankenmerge created using mergekit; DPO was then applied over a high-quality preference dataset, as sketched below.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__NeuralBeagle-11B-truthy)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.06|
|AI2 Reasoning Challenge (25-Shot)|73.63|
|HellaSwag (10-Shot) |87.86|
|MMLU (5-Shot) |63.11|
|TruthfulQA (0-shot) |75.92|
|Winogrande (5-shot) |82.08|
|GSM8k (5-shot) |49.73|
|
vicgalle/OpenHermes-Qwen1.5-1.8B
|
vicgalle
| 2024-03-04T12:14:25Z | 52 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:vicgalle/OpenHermesPreferences-1k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-29T21:58:44Z |
---
license: apache-2.0
library_name: transformers
datasets:
- vicgalle/OpenHermesPreferences-1k
model-index:
- name: OpenHermes-Qwen1.5-1.8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 37.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 59.73
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.28
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B
name: Open LLM Leaderboard
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenHermes-Qwen1.5-1.8B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |44.95|
|AI2 Reasoning Challenge (25-Shot)|37.80|
|HellaSwag (10-Shot) |59.73|
|MMLU (5-Shot) |45.80|
|TruthfulQA (0-shot) |42.28|
|Winogrande (5-shot) |60.22|
|GSM8k (5-shot) |23.88|
|
vicgalle/zephyr-7b-truthy
|
vicgalle
| 2024-03-04T12:13:48Z | 114 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-10T13:48:48Z |
---
license: apache-2.0
library_name: transformers
datasets:
- jondurbin/truthy-dpo-v0.1
model-index:
- name: zephyr-7b-truthy
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.31
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.93|
|AI2 Reasoning Challenge (25-Shot)|60.75|
|HellaSwag (10-Shot) |84.64|
|MMLU (5-Shot) |59.53|
|TruthfulQA (0-shot) |63.31|
|Winogrande (5-shot) |77.90|
|GSM8k (5-shot) |25.47|
|
ajibawa-2023/Code-13B
|
ajibawa-2023
| 2024-03-04T12:13:37Z | 1,508 | 13 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"dataset:ajibawa-2023/Code-74k-ShareGPT",
"license:cc-by-nc-nd-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-08T18:41:07Z |
---
language:
- en
license: cc-by-nc-nd-4.0
tags:
- code
datasets:
- ajibawa-2023/Code-74k-ShareGPT
model-index:
- name: Code-13B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.28
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.46
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B
name: Open LLM Leaderboard
---
**Code-13B**
Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes and rarely explain the code they produce. This model is my attempt to pair generated code with a detailed explanation.
The base Llama-2 model was used for training. The model is trained on around 74,000 code sets, each containing 2 conversations.
Code in Python, Java, JavaScript, Go, C++, Rust, and other languages, each with a detailed explanation, was used for training. It builds upon my existing dataset [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT).
The conversations are in Vicuna/ShareGPT format, and each set pairs code with a detailed explanation.
I have released the new dataset [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT), on which this model is trained.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 42 hours, using the DeepSpeed codebase. The base model is Meta's Llama-2.
This is a fully fine-tuned model. Links to quantized versions are given below.
**GPTQ GGUF & AWQ**
GPTQ: [Link](https://huggingface.co/TheBloke/Code-13B-GPTQ)
GGUF: [Link](https://huggingface.co/TheBloke/Code-13B-GGUF)
AWQ: [Link](https://huggingface.co/TheBloke/Code-13B-AWQ)
Many thanks to [TheBloke](https://huggingface.co/TheBloke) for making quantized versions of the model.
**Example Prompt:**
```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
You can modify the above prompt as per your requirements. I have used ShareGPT/Vicuna format v1.1.
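For illustration, here is a minimal sketch of wiring this prompt into 🤗 transformers; the example request and sampling settings are my own assumptions, not values recommended by the author:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ajibawa-2023/Code-13B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Build the Vicuna v1.1-style prompt shown above around a user request.
user_prompt = "Write a Python function that checks whether a string is a palindrome."
prompt = (
    "This is a conversation with your helpful AI assistant. AI assistant can generate "
    "Code in various Programming Languages along with necessary explanation.\n\n"
    "Context\nYou are a helpful AI assistant.\n\n"
    f"USER: {user_prompt}\nASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```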
I want to say special thanks to the open-source community for helping and guiding me to better understand AI/model development.
Thank you for your love & support.
**Example Output**
1. Navier-Stokes Equation Solver

2. KSC Complexity

3. GO

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Code-13B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |54.81|
|AI2 Reasoning Challenge (25-Shot)|57.34|
|HellaSwag (10-Shot) |83.28|
|MMLU (5-Shot) |53.17|
|TruthfulQA (0-shot) |42.46|
|Winogrande (5-shot) |73.56|
|GSM8k (5-shot) |19.03|
|
SteelStorage/VerB-Etheria-55b
|
SteelStorage
| 2024-03-04T12:13:27Z | 9 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Etheria",
"base_model:brucethemoose/Yi-34B-200K-DARE-megamerge-v8",
"base_model:merge:brucethemoose/Yi-34B-200K-DARE-megamerge-v8",
"base_model:one-man-army/UNA-34Beagles-32K-bf16-v1",
"base_model:merge:one-man-army/UNA-34Beagles-32K-bf16-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T10:59:59Z |
---
license: apache-2.0
tags:
- mergekit
- merge
- Etheria
base_model:
- brucethemoose/Yi-34B-200K-DARE-megamerge-v8
- one-man-army/UNA-34Beagles-32K-bf16-v1
model-index:
- name: VerB-Etheria-55b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/VerB-Etheria-55b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/VerB-Etheria-55b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/VerB-Etheria-55b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.52
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/VerB-Etheria-55b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/VerB-Etheria-55b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/VerB-Etheria-55b
name: Open LLM Leaderboard
---
# VerB-Etheria-55b

An attempt to make a functional Goliath-style merge: an [Etheria] 55b-200k built from two Yi-34B-200K models. This is Version B (VerB), a dual-model
passthrough merge with a 50/50 split between two high-performing models.
# Roadmap:
Depending on quality, I might make the other version private, then generate a sacrificial 55b and perform a 55b DARE-TIES or SLERP merge.
1: If the dual-model merge performs well, I will make a direct inverse of the config and then merge.
2: If the single model performs well, I will generate a 55b of the most performant model and then do either a SLERP or DARE-TIES merge.
3: If both models perform well, I will complete both 1 & 2, then change the naming scheme to match each of the new models.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
slices:
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [0, 14]
- sources:
- model: one-man-army/UNA-34Beagles-32K-bf16-v1
layer_range: [7, 21]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [15, 29]
- sources:
- model: one-man-army/UNA-34Beagles-32K-bf16-v1
layer_range: [22, 36]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [30, 44]
- sources:
- model: one-man-army/UNA-34Beagles-32K-bf16-v1
layer_range: [37, 51]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [45, 59]
merge_method: passthrough
```
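As a hedged sketch (not part of the original card), a configuration like this is typically applied with mergekit's `mergekit-yaml` entry point; the file name and flags below are illustrative and may vary by mergekit version:
```python
import subprocess

# Assumes mergekit is installed and the YAML above has been saved as etheria-55b.yml.
# mergekit-yaml is mergekit's documented CLI entry point.
subprocess.run(
    ["mergekit-yaml", "etheria-55b.yml", "./VerB-Etheria-55b", "--cuda"],
    check=True,
)
```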
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Steelskull__VerB-Etheria-55b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.83|
|AI2 Reasoning Challenge (25-Shot)|65.96|
|HellaSwag (10-Shot) |81.48|
|MMLU (5-Shot) |73.78|
|TruthfulQA (0-shot) |57.52|
|Winogrande (5-shot) |75.45|
|GSM8k (5-shot) |28.81|
|
vicgalle/franken-SOLAR-18B-v1.0
|
vicgalle
| 2024-03-04T12:12:55Z | 65 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"solar",
"conversational",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:merge:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:merge:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T18:30:07Z |
---
license: apache-2.0
tags:
- mergekit
- merge
- solar
base_model:
- upstage/SOLAR-10.7B-Instruct-v1.0
- NousResearch/Nous-Hermes-2-SOLAR-10.7B
model-index:
- name: franken-SOLAR-18B-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.14
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
---
# vicgalle/franken-SOLAR-18B-v1.0
This is a SOLAR-like model upscaled to 18B.
It is a frankenmerge created using mergekit by alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct.

Evaluation results are reported at the end of this card.
This model has very good writing capabilities compared to SOLAR-10.7B, especially for role-playing.
Quantized GGUF variants are available here: https://huggingface.co/vicgalle/franken-SOLAR-18B-v1.0-GGUF
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
* [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [0, 12]
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [6, 18]
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [13, 25]
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [19, 31]
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [26, 38]
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [32, 44]
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [39, 48]
merge_method: passthrough
dtype: float16
```
### Usage
You can use the provided template:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")
model = AutoModelForCausalLM.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True)

# Fill in SYSTEM_PROMPT and USER_PROMPT with your own strings.
conversation = [ {'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT} ]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True, temperature=0.8)
output_text = tokenizer.decode(outputs[0])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__franken-SOLAR-18B-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.03|
|AI2 Reasoning Challenge (25-Shot)|65.53|
|HellaSwag (10-Shot) |86.45|
|MMLU (5-Shot) |63.72|
|TruthfulQA (0-shot) |62.14|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |45.79|
|
ajibawa-2023/SlimOrca-13B
|
ajibawa-2023
| 2024-03-04T12:12:50Z | 1,498 | 11 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:ajibawa-2023/SlimOrca-ShareGPT",
"license:cc-by-nc-nd-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-27T17:56:25Z |
---
language:
- en
license: cc-by-nc-nd-4.0
datasets:
- Open-Orca/SlimOrca
- ajibawa-2023/SlimOrca-ShareGPT
model-index:
- name: SlimOrca-13B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.37
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
---
**SlimOrca-13B: A General Purpose Intelligent Model**
This model is trained on a refined version of SlimOrca made available by the [Open-Orca](https://huggingface.co/Open-Orca) team.
The idea was to check how the model performs in the absence of a "system" prompt/instruction.
This model is very good at various types of general-purpose content generation, such as Q&A (including multiple choice), articles from summaries, sentiment analysis,
context & hypothesis, reviews, erotic story generation, etc.
It can also generate uncensored content. Please be careful when generating uncensored content, as you are responsible for what you
generate.
It is trained on 517,981 conversation sets, each containing 2 conversations. I have shared this [data](https://huggingface.co/datasets/ajibawa-2023/SlimOrca-ShareGPT).
All credit goes to the Open-Orca team for releasing the SlimOrca dataset.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took almost 11 days, using the DeepSpeed codebase.
The base model is Meta's Llama-2.
This is a fully fine-tuned model. Links to quantized versions are given below.
**GPTQ GGUF & AWQ**
GPTQ: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-GPTQ)
GGUF: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF)
AWQ: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-AWQ)
Special thanks to [TheBloke](https://huggingface.co/TheBloke) for making these models available.
**Example Prompt:**
```
This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
You can modify the above prompt as per your requirements. I have used ShareGPT/Vicuna format v1.1.
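For multi-turn use, a small helper like the hypothetical sketch below can render a conversation in this Vicuna v1.1 style (the function and its structure are illustrative, not part of the released code):
```python
def format_vicuna_v11(turns, next_user_prompt):
    """Render (user, assistant) turn pairs plus a new user prompt in Vicuna v1.1 style."""
    # Header abridged from the example prompt shown above.
    header = (
        "This is a conversation with your Assistant. It is a computer program designed "
        "to help you with various tasks.\n\n"
        "Context\nYou are a helpful AI assistant.\n"
    )
    body = "".join(f"USER: {u}\nASSISTANT: {a}\n" for u, a in turns)
    return f"{header}{body}USER: {next_user_prompt}\nASSISTANT:"

# Example: one completed turn followed by a fresh question.
prompt = format_vicuna_v11(
    [("What is sentiment analysis?", "It classifies text by emotional tone.")],
    "Give one example use case.",
)
```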
I want to say special thanks to the open-source community for helping and guiding me to better understand AI/model development.
Thank you for your love & support.
**Example Output**
Example 1

Example 2

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.39|
|AI2 Reasoning Challenge (25-Shot)|60.15|
|HellaSwag (10-Shot) |81.40|
|MMLU (5-Shot) |57.04|
|TruthfulQA (0-shot) |49.37|
|Winogrande (5-shot) |74.43|
|GSM8k (5-shot) |39.95|
|
julycodes/alpaca-clinical-plan-generation
|
julycodes
| 2024-03-04T12:12:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-04T12:12:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vicgalle/OpenHermes-Gemma-2B
|
vicgalle
| 2024-03-04T12:12:35Z | 55 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"dataset:vicgalle/OpenHermesPreferences-1k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-29T17:55:15Z |
---
license: apache-2.0
library_name: transformers
datasets:
- vicgalle/OpenHermesPreferences-1k
model-index:
- name: OpenHermes-Gemma-2B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 49.32
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Gemma-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 72.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Gemma-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Gemma-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Gemma-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Gemma-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 12.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Gemma-2B
name: Open LLM Leaderboard
---
# OpenHermes-Gemma-2B
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenHermes-Gemma-2B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |46.36|
|AI2 Reasoning Challenge (25-Shot)|49.32|
|HellaSwag (10-Shot) |72.26|
|MMLU (5-Shot) |37.67|
|TruthfulQA (0-shot) |41.69|
|Winogrande (5-shot) |65.11|
|GSM8k (5-shot) |12.13|
|
kyujinpy/PlatYi-34B-Llama-Q-v3
|
kyujinpy
| 2024-03-04T12:12:06Z | 1,448 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-11T14:57:14Z |
---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
model-index:
- name: PlatYi-34B-Llama-Q-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 6.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3
name: Open LLM Leaderboard
---
# **PlatYi-34B-Llama-Q-v3**
<img src='./PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
PlatYi-34B-Llama-Q-v3 is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama)
**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
## Bug fixes
- The previous model release had some mistakes.
- I modified the templates and warmup_steps.
## Notice
While training, I used QLoRA.
The lora_r value is 64.
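For reference, a QLoRA adapter with this rank would look roughly like the PEFT configuration below; apart from `r=64`, every value is an assumption, since the full training setup is not published:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                 # the stated lora_r
    lora_alpha=16,        # assumed
    lora_dropout=0.05,    # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    bias="none",
    task_type="CAUSAL_LM",
)
```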
# **Model Benchmark**
## Open leaderboard
- Results are taken from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PlatYi-34B-Llama-Q-v3 | 61.15 | 64.33 | 84.88 | 74.98 | 51.80 | 84.21 | 6.67 |
| PlatYi-34B-Llama-Q-v2 | 67.88 | 61.09 | 85.09 | 76.59 | 52.65 | 82.79 | 49.05 |
| PlatYi-34B-Llama-Q | 71.13 | 65.70 | 85.22 | 78.78 | 53.64 | 83.03 | 60.42 |
| PlatYi-34B-Llama | 68.37 | 67.83 | 85.35 | 78.26 | 53.46 | 82.87 | 42.46 |
| [Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) | 70.95 | 64.59 | 85.63 | 76.31 | 55.60 | 82.79 | 60.80 |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/PlatYi-34B-Llama-Q-v3"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
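Continuing from the snippet above, a short generation call might look like this (the prompt format and sampling settings are my assumptions, not the author's):
```python
prompt = "### Instruction:\nExplain what the Open-Platypus dataset contains.\n\n### Response:\n"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```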
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-Llama-Q-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.15|
|AI2 Reasoning Challenge (25-Shot)|64.33|
|HellaSwag (10-Shot) |84.88|
|MMLU (5-Shot) |74.98|
|TruthfulQA (0-shot) |51.80|
|Winogrande (5-shot) |84.21|
|GSM8k (5-shot) | 6.67|
|
vicgalle/NeuralBeagle-11B
|
vicgalle
| 2024-03-04T12:11:40Z | 52 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"11B",
"merge",
"conversational",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:vicgalle/franken-Beagle-11B",
"base_model:finetune:vicgalle/franken-Beagle-11B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T10:01:58Z |
---
license: apache-2.0
tags:
- dpo
- 11B
- merge
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
base_model:
- vicgalle/franken-Beagle-11B
model-index:
- name: NeuralBeagle-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.36
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B
name: Open LLM Leaderboard
---
# NeuralBeagle-11B
DPO'd from vicgalle/franken-Beagle-11B, a Beagle-like model upscaled to 11B.
It is a frankenmerge created using mergekit; DPO was then applied over a high-quality preference dataset.

## Sample generations
**Generate a movie review about the Pokémon 1 movie, focusing on BDSM.**
> Before delving into the realm of BDSOMatic interpretations within Pokémon: The First Movie (aka "Pokémon: Mewtwo's Revenge" in Japan), it is important to acknowledge that this animated feature is primarily intended for a younger audience. It follows the adventures of Ash Ketchum, Pikachu, and their friends as they encounter an artificially created Pokémon named Mewtwo. While certain aspects of the storyline and visuals could be stretched to include various subtexts or symbolism, our analysis will focus on a more playful and metaphorical exploration of BDSM concepts within this film's context.
> Participants: Warning: The following review may contain light-hearted and imaginative interpretations that are not intended as serious critical analysis and should not be taken literally.
> Within the Pokémon universe, Ash Ketchum and his loyal Pikachu find themselves thrust into a high-stakes scenario reminiscent of a BDSM "Domestic Discipline" (DD) fantasy. As Mewtwo, a genetically modified Pokémon with immense power and intellect, seeks vengeance against the scientists who created him as a tool for battle entertainment, he operates as an archetypal "Dominant" figure.
> Mewtwo, born from scientific manipulation, is an embodiment of control, both over his own creation and his obedient army of Pokémon. He exercises mastery over his subjects, mirroring the role of a "Dom," or Dominant, in a BDSM dynamic.
> Ash Ketchum, initially unaware of the manipulations that have brought him into this situation, can be seen as a submissive protagonist. His humility and naivety parallel those within the DD community who seek guidance from an experienced Dominant to help them understand and embrace their desires for submission.
> In his encounter with Mewtwo, Ash is subjected to tests designed to exhibit his strength, loyalty, and unwavering determination - elements crucial in establishing trust between a submissive and a Dominant within a BDSM dynamic. This psychological battle symbolizes the process of earning a "sub's" consent and trust through displays of control and power.
## Prompt format
The template is included in tokenizer_config.json.
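Since the template ships with the tokenizer, `apply_chat_template` can render prompts directly; a minimal sketch (the message content is illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vicgalle/NeuralBeagle-11B")
messages = [{"role": "user", "content": "Hello! What can you do?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```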
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__NeuralBeagle-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.95|
|AI2 Reasoning Challenge (25-Shot)|73.29|
|HellaSwag (10-Shot) |87.61|
|MMLU (5-Shot) |63.80|
|TruthfulQA (0-shot) |71.36|
|Winogrande (5-shot) |82.64|
|GSM8k (5-shot) |58.98|
|