**Dataset schema**

| Column | Type | Observed range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-01 18:27:28 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 532 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-01 18:27:19 |
| card | string | length 11 to 1.01M |
jysssacc/mt0-base_adalora_lr0.005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T20:36:51Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-base",
"base_model:adapter:bigscience/mt0-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T20:30:56Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-base
model-index:
- name: mt0-base_adalora_lr0.005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-base_adalora_lr0.005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8655 | 1.0 | 157 | 0.0115 |
| 0.0402 | 2.0 | 314 | 0.1096 |
| 0.0996 | 3.0 | 471 | 0.0578 |
| 0.0749 | 4.0 | 628 | 0.0174 |
| 0.0274 | 5.0 | 785 | 0.0019 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
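The card omits usage code; below is a minimal sketch of loading the adapter with the standard PEFT API (repo and base-model names are taken from the metadata above; the prompt is a placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# mt0-base is a seq2seq (mT5-derived) model, so we load the seq2seq head.
base = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-base")
model = PeftModel.from_pretrained(base, "jysssacc/mt0-base_adalora_lr0.005_bs4_epoch5_wd0.01")
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-base")

# The training dataset is undocumented, so this input is only illustrative.
inputs = tokenizer("Translate to English: Bonjour", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```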
|
TrungChinh/test_trainer
|
TrungChinh
| 2024-01-11T20:33:05Z | 175 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/xtremedistil-l6-h384-uncased",
"base_model:finetune:microsoft/xtremedistil-l6-h384-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-11T20:32:58Z |
---
license: mit
base_model: microsoft/xtremedistil-l6-h384-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1134
- Accuracy: 0.9770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 0.6990 | 0.4967 |
| No log | 2.0 | 20 | 0.6889 | 0.4967 |
| No log | 3.0 | 30 | 0.6503 | 0.7697 |
| No log | 4.0 | 40 | 0.4720 | 0.9276 |
| No log | 5.0 | 50 | 0.3175 | 0.9572 |
| No log | 6.0 | 60 | 0.2181 | 0.9770 |
| No log | 7.0 | 70 | 0.1761 | 0.9770 |
| No log | 8.0 | 80 | 0.1551 | 0.9770 |
| No log | 9.0 | 90 | 0.1427 | 0.9770 |
| No log | 10.0 | 100 | 0.1345 | 0.9770 |
| No log | 11.0 | 110 | 0.1341 | 0.9737 |
| No log | 12.0 | 120 | 0.1240 | 0.9770 |
| No log | 13.0 | 130 | 0.1214 | 0.9770 |
| No log | 14.0 | 140 | 0.1182 | 0.9770 |
| No log | 15.0 | 150 | 0.1164 | 0.9770 |
| No log | 16.0 | 160 | 0.1149 | 0.9770 |
| No log | 17.0 | 170 | 0.1141 | 0.9770 |
| No log | 18.0 | 180 | 0.1131 | 0.9770 |
| No log | 19.0 | 190 | 0.1127 | 0.9770 |
| No log | 20.0 | 200 | 0.1121 | 0.9770 |
| No log | 21.0 | 210 | 0.1119 | 0.9770 |
| No log | 22.0 | 220 | 0.1117 | 0.9770 |
| No log | 23.0 | 230 | 0.1128 | 0.9770 |
| No log | 24.0 | 240 | 0.1134 | 0.9770 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
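The card gives no usage snippet; a minimal sketch with the standard `transformers` pipeline (label names are whatever the checkpoint carries, since the dataset is undocumented):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="TrungChinh/test_trainer")
print(clf("Example sentence to classify."))
```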
|
textminr/ner-distilbert
|
textminr
| 2024-01-11T20:32:16Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-30T17:23:10Z |
---
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-distilbert
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0003
- Precision: 0.9988
- Recall: 0.9980
- F1: 0.9984
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0002 | 0.16 | 250 | 0.0011 | 0.9961 | 0.9980 | 0.9971 | 0.9996 |
| 0.0001 | 0.31 | 500 | 0.0008 | 0.9977 | 0.9977 | 0.9977 | 0.9997 |
| 0.0004 | 0.47 | 750 | 0.0005 | 0.9992 | 0.9977 | 0.9984 | 0.9998 |
| 0.0002 | 0.63 | 1000 | 0.0005 | 0.9984 | 0.9977 | 0.9980 | 0.9997 |
| 0.0002 | 0.79 | 1250 | 0.0003 | 0.9988 | 0.9980 | 0.9984 | 0.9998 |
| 0.0 | 0.94 | 1500 | 0.0003 | 0.9988 | 0.9980 | 0.9984 | 0.9998 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0
- Datasets 2.14.5
- Tokenizers 0.14.1
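A minimal inference sketch with the standard pipeline API (the entity label set is undocumented, so the example input is arbitrary):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="textminr/ner-distilbert", aggregation_strategy="simple")
print(ner("Jane Austen published Emma in 1815."))
```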
|
yunakuz/bert-base-cased-finetuned-wikitext2
|
yunakuz
| 2024-01-11T20:20:17Z | 55 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-11T19:59:29Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: yunakuz/bert-base-cased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yunakuz/bert-base-cased-finetuned-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.9649
- Validation Loss: 6.9201
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.4277 | 7.0733 | 0 |
| 6.9649 | 6.9201 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
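A minimal inference sketch; since the repo ships TensorFlow weights (note the `tf` tag), the TF backend is requested explicitly:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="yunakuz/bert-base-cased-finetuned-wikitext2", framework="tf")
print(fill("The capital of France is [MASK]."))
```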
|
version-control/tf-1.0-1.13-prefix
|
version-control
| 2024-01-11T20:19:30Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"region:us"
] | null | 2024-01-11T16:40:18Z |
---
library_name: peft
base_model: bigcode/starcoderbase-1b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
MaziyarPanahi/PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T20:19:17Z | 25 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"maywell/PiVoT-10.7B-Mistral-v0.2",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T20:13:49Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- maywell/PiVoT-10.7B-Mistral-v0.2
---
# PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp
PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [maywell/PiVoT-10.7B-Mistral-v0.2](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: maywell/PiVoT-10.7B-Mistral-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
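Here `t` sets the interpolation weight between the two source models per parameter group: the `filter` entries give layer-wise schedules for the self-attention and MLP weights, and the bare `value: 0.5` applies everywhere else. For intuition, spherical linear interpolation between two flattened weight tensors looks roughly like this (an illustrative sketch, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    v0, v1 = w0.ravel(), w1.ravel()
    cos_theta = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * w0 + t * w1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * w0 + (np.sin(t * theta) / s) * w1
```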
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
stablediffusionapi/vrr
|
stablediffusionapi
| 2024-01-11T20:14:48Z | 29 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-11T20:12:46Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# vrr API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to `"vrr"`.
Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/vrr)
Model link: [View model](https://modelslab.com/models/vrr)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "vrr",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use coupon code **DMGG0RBN** to get 25% off.
|
jysssacc/627_roberta-base_adalora_lr0.0005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T20:11:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-11T20:04:54Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_adalora_lr0.0005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_adalora_lr0.0005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 20.4011 | 1.0 | 157 | 7.1647 |
| 3.9809 | 2.0 | 314 | 2.0607 |
| 1.856 | 3.0 | 471 | 0.7107 |
| 0.6764 | 4.0 | 628 | 0.3786 |
| 0.489 | 5.0 | 785 | 0.3088 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
crackalamoo/bardgpt
|
crackalamoo
| 2024-01-11T20:11:16Z | 0 | 0 | null |
[
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2023-07-23T21:54:37Z |
---
license: mit
language:
- en
pipeline_tag: text-generation
---
# BardGPT
BardGPT is a miniature GPT model (39M parameters) for generating poetry, coded from scratch in TensorFlow. In addition to a GPT-style architecture, BardGPT is able to process data specifically related to poetry, in particular rhymes and syllable counts.
To run BardGPT, download this repository and run `python model.py --load`. You will need NumPy and TensorFlow.
For more information, see the GitHub repository at [crackalamoo/bardgpt](https://github.com/crackalamoo/bardgpt) or [my blog post](http://harysdalvi.com/bardgpt).
|
LaserNav/SophyAI-Mistral-7B-v3-GGUF
|
LaserNav
| 2024-01-11T20:11:08Z | 10 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"gguf",
"legal",
"+easa",
"+usv",
"it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-06T13:50:36Z |
---
license: apache-2.0
language:
- it
library_name: adapter-transformers
tags:
- legal
- +easa
- +usv
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A preview model to support safety and security at work: a model fine-tuned in Italian on Italian regulations.
## Model Details
<!-- Provide a longer summary of what this model is. -->
This model, derived from Mistral-7B, has been fine-tuned on a dataset dedicated to workplace safety regulations, developed by our AI team. The patented SophyAI platform is a digital-twin framework for building an AI supervisor that implements safety and security workflows at work. More info is available at https://www.lasernavigation.it. This is a preview version of our SophyAI-LLM model; the fine-tuning was done in Italian, so this early preview may not work in other languages.
- **Developed by:** Laser Navigation srl
- **Model type:** Fine-tuned Mistral
- **Language(s) (NLP):** Italian
- **License:** BSD
- **Finetuned from model:** Mistral 7B
|
Lanxi24/BBB-bbb
|
Lanxi24
| 2024-01-11T20:05:57Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-01-11T20:01:24Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/images (8).jfif
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# BBB-bbb
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Lanxi24/BBB-bbb/tree/main) them in the Files & versions tab.
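A minimal loading sketch with diffusers (standard LoRA-loading API; the prompt is a placeholder):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Lanxi24/BBB-bbb")

image = pipe("a placeholder prompt").images[0]
image.save("out.png")
```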
|
Perselope/Taxi-v3_v3
|
Perselope
| 2024-01-11T20:01:59Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T20:01:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 8.18 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Perselope/Taxi-v3_v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
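To sanity-check the download, a short greedy rollout can be run against the loaded Q-table (a sketch assuming the course's pickle layout with a `qtable` entry and a Gymnasium-style env API):

```python
import numpy as np

# Gymnasium-style API: reset returns (obs, info), step returns five values.
state, _ = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    total_reward += reward
print("episode reward:", total_reward)
```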
|
ChrisKalahiki/ppo-LunarLander-v2
|
ChrisKalahiki
| 2024-01-11T20:01:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T20:01:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.31 +/- 12.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
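Until the stub above is filled in, a minimal loading sketch looks like this (the checkpoint `filename` follows the usual huggingface_sb3 `<algo>-<env>.zip` convention and is an assumption):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repo's Files tab for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="ChrisKalahiki/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```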
|
yunakuz/gpt2-finetuned-wikitext2
|
yunakuz
| 2024-01-11T19:58:17Z | 48 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T19:37:00Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: yunakuz/gpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yunakuz/gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.4977
- Validation Loss: 6.3559
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.3101 | 6.7633 | 0 |
| 6.4977 | 6.3559 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Rafaelfr87/rl_course_vizdoom_health_gathering_supreme
|
Rafaelfr87
| 2024-01-11T19:58:09Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T19:57:56Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.48 +/- 6.39
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Rafaelfr87/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
Perselope/Taxi-v3_v1
|
Perselope
| 2024-01-11T19:57:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T19:57:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.81 +/- 2.28
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Perselope/Taxi-v3_v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
Perselope/Taxi-v3_0
|
Perselope
| 2024-01-11T19:56:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T19:56:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.89 +/- 2.37
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Perselope/Taxi-v3_0", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
tiagoblima/mt5_base-qg-ap-nopeft
|
tiagoblima
| 2024-01-11T19:52:54Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:tiagoblima/qg_squad_v1_pt",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-10T10:05:05Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
datasets:
- tiagoblima/qg_squad_v1_pt
model-index:
- name: mt5_base-qg-ap-nopeft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_base-qg-ap-nopeft
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the tiagoblima/qg_squad_v1_pt dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8875 | 1.0 | 1615 | 1.6600 |
| 1.7353 | 2.0 | 3231 | 1.6089 |
| 1.6382 | 3.0 | 4847 | 1.5926 |
| 1.5789 | 4.0 | 6463 | 1.5868 |
| 1.5343 | 5.0 | 8075 | 1.5832 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
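A minimal inference sketch (the prompt format expected by the checkpoint is undocumented; the T5-style `answer: ... context: ...` input below is an assumption):

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="tiagoblima/mt5_base-qg-ap-nopeft")
# Input format is an assumption; the card does not document the expected prompt.
print(qg("answer: Brasília context: Brasília é a capital do Brasil.", max_new_tokens=48))
```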
|
younoger/YGBNumbersBert-0.2
|
younoger
| 2024-01-11T19:51:19Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:younoger/autotrain-data-YGBNumbersBert-0.2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-11T19:51:10Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- younoger/autotrain-data-YGBNumbersBert-0.2
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.013029049150645733
- f1: 0.9189189189189189
- precision: 1.0
- recall: 0.85
- auc: 0.9999006951340615
- accuracy: 0.997078870496592
|
jysssacc/bloomz-560m_adalora_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T19:50:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-11T19:49:29Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: bloomz-560m_adalora_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-560m_adalora_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.1660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9547 | 1.0 | 157 | 4.2266 |
| 7.9904 | 2.0 | 314 | 11.8646 |
| 13.1393 | 3.0 | 471 | 25.8805 |
| 29.8465 | 4.0 | 628 | 29.5010 |
| 26.1422 | 5.0 | 785 | 8.1660 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_withPrompt_Seed103
|
behzadnet
| 2024-01-11T19:49:38Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-01-11T19:49:32Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
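For reference, the same quantization setup can be written with `transformers`' `BitsAndBytesConfig`; a minimal sketch (not the original training script):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model taken from the adapter's metadata above.
base = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
```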
### Framework versions
- PEFT 0.7.0.dev0
|
Feluda/zephyr-finetuned-legal
|
Feluda
| 2024-01-11T19:40:20Z | 10 | 1 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"text-generation",
"conversational",
"dataset:joelniklaus/legal_case_document_summarization",
"base_model:TheBloke/zephyr-7B-beta-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-beta-GPTQ",
"license:mit",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-04T16:41:59Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-beta-GPTQ
model-index:
- name: zephyr-finetuned-legal
results: []
datasets:
- joelniklaus/legal_case_document_summarization
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-finetuned-legal
This model is a fine-tuned version of [TheBloke/zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ) on the joelniklaus/legal_case_document_summarization dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
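A minimal loading sketch using PEFT's auto class (this assumes the GPTQ base model resolves from the adapter config, and that optimum and auto-gptq are installed; the prompt is a placeholder):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Feluda/zephyr-finetuned-legal", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/zephyr-7B-beta-GPTQ")

prompt = "Summarize the following case: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```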
|
Adamas23/mt5-large-finetuned-onet
|
Adamas23
| 2024-01-11T19:38:22Z | 85 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-large",
"base_model:finetune:google/mt5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-11T19:34:32Z |
---
license: apache-2.0
base_model: google/mt5-large
tags:
- generated_from_trainer
model-index:
- name: mt5-large-finetuned-onet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-large-finetuned-onet
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|:-------:|:-------:|
| No log | 0.99 | 124 | nan | 52.3833 | 58.8638 | 55.3814 | 19.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jysssacc/627_roberta-base_fine_lr0.0005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T19:38:19Z | 43 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-generation",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T19:27:57Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: 627_roberta-base_fine_lr0.0005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_fine_lr0.0005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2947 | 1.0 | 157 | 0.3166 |
| 0.6822 | 2.0 | 314 | 0.3862 |
| 1.0581 | 3.0 | 471 | 1.2625 |
| 1.459 | 4.0 | 628 | 1.1606 |
| 0.9811 | 5.0 | 785 | 0.2764 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ContextualAI/archangel_csft_llama30b
|
ContextualAI
| 2024-01-11T19:36:09Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T03:16:03Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama30b</b>
- optimized with the loss <b>CSFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
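Putting the prompt format and control tokens together with a standard `transformers` generation call (an illustrative sketch; the sampling settings and the choice of `<|good|>` are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_csft_llama30b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# BOS is added automatically by the tokenizer; for the conditional-SFT models,
# append a control token (<|good|> or <|bad|>) to the prompt.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n<|good|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```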
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-kto_llama30b
|
ContextualAI
| 2024-01-11T19:36:05Z | 16 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T05:02:03Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama30b</b>
- optimized with the loss <b>SFT+KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-dpo_llama30b
|
ContextualAI
| 2024-01-11T19:36:03Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T01:32:50Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama30b</b>
- optimized with the loss <b>SFT+DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_kto_llama30b
|
ContextualAI
| 2024-01-11T19:35:57Z | 17 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T00:31:06Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama30b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_dpo_llama30b
|
ContextualAI
| 2024-01-11T19:35:56Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T22:54:43Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama30b</b>
- optimized with the loss <b>DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_ppo_llama30b
|
ContextualAI
| 2024-01-11T19:35:54Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T01:58:21Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama30b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_csft_llama13b
|
ContextualAI
| 2024-01-11T19:35:53Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T23:46:08Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>CSFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt, as in the sketch below.
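For example, a minimal sketch of a `<|good|>`-conditioned prompt (the token placement simply follows the append rule above; the exact whitespace is an assumption, and swapping in `<|bad|>` conditions on the dispreferred distribution instead):
```
# Append the control token after the final assistant tag so that
# generation is conditioned on the desirable-response distribution.
prompt = (
    "<|user|>\n"
    "Hi! I'm looking for a cake recipe.\n"
    "<|assistant|>\n"
    "<|good|>"
)
```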
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-kto_llama13b
|
ContextualAI
| 2024-01-11T19:35:49Z | 1,583 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T04:27:25Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>SFT+KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-dpo_llama13b
|
ContextualAI
| 2024-01-11T19:35:48Z | 190 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-02T23:54:48Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>SFT+DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-ppo_llama13b
|
ContextualAI
| 2024-01-11T19:35:46Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T08:40:04Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>SFT+PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_slic_llama13b
|
ContextualAI
| 2024-01-11T19:35:45Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T21:26:13Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_kto_llama13b
|
ContextualAI
| 2024-01-11T19:35:42Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T00:21:05Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_ppo_llama13b
|
ContextualAI
| 2024-01-11T19:35:40Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T01:48:34Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_csft_llama7b
|
ContextualAI
| 2024-01-11T19:35:39Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T19:54:28Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama7b</b>
- optimized with the loss <b>CSFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-slic_llama7b
|
ContextualAI
| 2024-01-11T19:35:36Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T19:37:00Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama7b</b>
- optimized with the loss <b>SFT+SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft_llama7b
|
ContextualAI
| 2024-01-11T19:35:30Z | 125 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T19:49:54Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama7b</b>
- optimized with the loss <b>SFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_kto_llama7b
|
ContextualAI
| 2024-01-11T19:35:29Z | 20 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T00:14:40Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama7b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_dpo_llama7b
|
ContextualAI
| 2024-01-11T19:35:28Z | 630 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T22:38:57Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama7b</b>
- optimized with the loss <b>DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_ppo_llama7b
|
ContextualAI
| 2024-01-11T19:35:27Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T01:42:23Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>llama7b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-slic_pythia12-0b
|
ContextualAI
| 2024-01-11T19:35:24Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T15:42:18Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia12-0b</b>
- optimized with the loss <b>SFT+SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-kto_pythia12-0b
|
ContextualAI
| 2024-01-11T19:35:22Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T03:35:52Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia12-0b</b>
- optimized with the loss <b>SFT+KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_slic_pythia12-0b
|
ContextualAI
| 2024-01-11T19:35:19Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T21:11:17Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia12-0b</b>
- optimized with the loss <b>SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_kto_pythia12-0b
|
ContextualAI
| 2024-01-11T19:35:16Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T00:06:01Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia12-0b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_dpo_pythia12-0b
|
ContextualAI
| 2024-01-11T19:35:15Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T22:30:26Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia12-0b</b>
- optimized with the loss <b>DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-slic_pythia6-9b
|
ContextualAI
| 2024-01-11T19:35:09Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T11:23:08Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>SFT+SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_slic_pythia6-9b
|
ContextualAI
| 2024-01-11T19:35:03Z | 28 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T21:04:51Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft_pythia6-9b
|
ContextualAI
| 2024-01-11T19:35:02Z | 43 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T19:39:34Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>SFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in the embeddings.
To generate with these control tokens in the context, append either token to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_ppo_pythia6-9b
|
ContextualAI
| 2024-01-11T19:34:58Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T01:27:25Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_csft_pythia2-8b
|
ContextualAI
| 2024-01-11T19:34:57Z | 108 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T09:00:45Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia2-8b</b>
- optimized with the loss <b>CSFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss (such as this one), the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt, as sketched below.
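A minimal sketch of steering generation with a control token (assuming, as an illustration, that the token simply follows the `<|assistant|>` tag; consult the code repository for the exact placement):
```python
# Minimal sketch: appending a <|good|> control token to steer a CSFT model.
# The token placement here is an assumption; check the code repository for specifics.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "ContextualAI/archangel_csft_pythia2-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
prompt += "<|good|>"  # use <|bad|> to condition on the dispreferred behavior instead
inputs = tokenizer(prompt, return_tensors="pt")  # BOS is added automatically
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```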
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-csft_pythia2-8b
|
ContextualAI
| 2024-01-11T19:34:56Z | 112 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T08:21:19Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia2-8b</b>
- optimized with the loss <b>SFT+CSFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-slic_pythia2-8b
|
ContextualAI
| 2024-01-11T19:34:55Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T08:50:34Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia2-8b</b>
- optimized with the loss <b>SFT+SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-dpo_pythia2-8b
|
ContextualAI
| 2024-01-11T19:34:53Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-02T22:44:10Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia2-8b</b>
- optimized with the loss <b>SFT+DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_slic_pythia2-8b
|
ContextualAI
| 2024-01-11T19:34:50Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T21:00:05Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia2-8b</b>
- optimized with the loss <b>SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_dpo_pythia2-8b
|
ContextualAI
| 2024-01-11T19:34:47Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T22:19:21Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia2-8b</b>
- optimized with the loss <b>DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_csft_pythia1-4b
|
ContextualAI
| 2024-01-11T19:34:45Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T07:49:08Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia1-4b</b>
- optimized with the loss <b>CSFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-csft_pythia1-4b
|
ContextualAI
| 2024-01-11T19:34:44Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T07:25:13Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia1-4b</b>
- optimized with the loss <b>SFT+CSFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizers have the additional control tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, append either one to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft-dpo_pythia1-4b
|
ContextualAI
| 2024-01-11T19:34:41Z | 120 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-02T22:38:07Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia1-4b</b>
- optimized with the loss <b>SFT+DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_sft_pythia1-4b
|
ContextualAI
| 2024-01-11T19:34:37Z | 211 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T19:33:05Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia1-4b</b>
- optimized with the loss <b>SFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
LoneStriker/TowerInstruct-7B-v0.1-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-11T19:28:10Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-11T19:25:17Z |
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---
# Model Card for TowerInstruct-7B-v0.1
## Model Details
### Model Description
TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1)
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation
- Automatic Post-Editing
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
You can find the dataset and all of its data sources on the [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) page.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
## Bias, Risks, and Limitations
TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Prompt Format
TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```
### Supervised tasks
The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.
## Training Details
### Training Data
Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).
#### Training Hyperparameters
The following hyperparameters were used during training:
- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048
## Citation
To be completed.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
jysssacc/bloomz-560m_lora_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T19:25:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-11T19:24:14Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: bloomz-560m_lora_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-560m_lora_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7951 | 1.0 | 157 | 23.6896 |
| 7.958 | 2.0 | 314 | 12.3009 |
| 9.051 | 3.0 | 471 | 8.2580 |
| 7.2931 | 4.0 | 628 | 7.2896 |
| 7.103 | 5.0 | 785 | 7.2360 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
facebook/sam-vit-large
|
facebook
| 2024-01-11T19:23:46Z | 56,740 | 28 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"sam",
"mask-generation",
"vision",
"arxiv:2304.02643",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
mask-generation
| 2023-04-19T14:17:03Z |
---
license: apache-2.0
tags:
- vision
---
# Model Card for Segment Anything Model (SAM) - ViT Large (ViT-L) version
<p>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-architecture.png" alt="Model architecture">
<em> Detailed architecture of Segment Anything Model (SAM).</em>
</p>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
# TL;DR
[Link to original repository](https://github.com/facebookresearch/segment-anything)
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-beancans.png" alt="Snow" width="600" height="600"> | <img src="https://huggingface.co/facebook/sam-vit-huge/discussions/7" alt="Forest" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car-seg.png" alt="Mountains" width="600" height="600"> |
|---------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.
The abstract of the paper states:
> We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the original [SAM model card](https://github.com/facebookresearch/segment-anything).
# Model Details
The SAM model is made up of the following modules:
- The `VisionEncoder`: a ViT-based image encoder. It computes the image embeddings using attention on patches of the image; relative positional embeddings are used.
- The `PromptEncoder`: generates embeddings for points and bounding boxes.
- The `MaskDecoder`: a two-way transformer which performs cross-attention between the image embedding and the point embeddings, and vice versa; its outputs are fed to the `Neck`.
- The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`.
# Usage
## Prompted-Mask-Generation
```python
from PIL import Image
import requests
from transformers import SamModel, SamProcessor
model = SamModel.from_pretrained("facebook/sam-vit-large").to("cuda")
processor = SamProcessor.from_pretrained("facebook/sam-vit-large")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D localization of a window
```
```python
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
scores = outputs.iou_scores
```
Among other arguments to generate masks, you can pass 2D locations on the approximate position of your object of interest, a bounding box wrapping the object of interest (the format should be the x, y coordinates of the top-left and bottom-right points of the bounding box), or a segmentation mask. At the time of writing, passing text as input is not supported by the official model according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844).
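For instance, a box prompt can be passed the same way as the point prompt above; a minimal sketch reusing the objects already defined (the coordinates are illustrative, not tied to a particular object in the image):
```python
# Minimal sketch: prompting with a bounding box instead of a point.
# Box format is [x1, y1, x2, y2] in pixel coordinates; the values below are illustrative.
input_boxes = [[[75.0, 275.0, 1725.0, 850.0]]]
inputs = processor(raw_image, input_boxes=input_boxes, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
```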
For more details, refer to this notebook, which shows a walkthrough of how to use the model, with a visual example!
## Automatic-Mask-Generation
The model can be used to generate segmentation masks in a "zero-shot" fashion, given an input image. The model is automatically prompted with a grid of `1024` points, all of which are fed to the model.
The pipeline is made for automatic mask generation. The following snippet demonstrates how easily you can run it (on any device; simply pass the appropriate `points_per_batch` argument):
```python
from transformers import pipeline
generator = pipeline("mask-generation", model="facebook/sam-vit-large", device=0)
image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
outputs = generator(image_url, points_per_batch=256)
```
Now to display the image:
```python
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import requests
def show_mask(mask, ax, random_color=False):
if random_color:
color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
else:
color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
h, w = mask.shape[-2:]
mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
ax.imshow(mask_image)
raw_image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
plt.imshow(np.array(raw_image))
ax = plt.gca()
for mask in outputs["masks"]:
show_mask(mask, ax=ax, random_color=True)
plt.axis("off")
plt.show()
```
# Citation
If you use this model, please use the following BibTeX entry.
```
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
```
|
LoneStriker/TowerInstruct-7B-v0.1-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-11T19:17:47Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-11T19:15:50Z |
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---
# Model Card for TowerInstruct-7B-v0.1
## Model Details
### Model Description
TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1)
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation
- Automatic Post-Editing
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
You can find the dataset and all of its data sources on the [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) page.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
## Bias, Risks, and Limitations
TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Prompt Format
TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```
### Supervised tasks
The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.
## Training Details
### Training Data
Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).
#### Training Hyperparameters
The following hyperparameters were used during training:
- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048
## Citation
To be completed.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
omarelsayeed/e5_base_tsdae
|
omarelsayeed
| 2024-01-11T19:16:36Z | 46 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-11T19:15:56Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
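Since this model targets sentence similarity, here is a minimal sketch of scoring a sentence pair with the library's `util.cos_sim` helper (the `{MODEL_NAME}` placeholder matches the usage snippet above):
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two sentences
```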
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9162 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.LoggingDenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 5
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 80, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Lanxi24/AAA-aaa
|
Lanxi24
| 2024-01-11T19:16:10Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-01-11T19:14:58Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/images (8).jfif
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# ns-NS
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Lanxi24/AAA-aaa/tree/main) them in the Files & versions tab.
|
LoneStriker/TowerInstruct-7B-v0.1-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-11T19:12:44Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-11T19:11:07Z |
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---
# Model Card for TowerInstruct-7B-v0.1
## Model Details
### Model Description
TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1)
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation
- Automatic Post-Editing
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
You can find the dataset and all of its data sources on the [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) page.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
## Bias, Risks, and Limitations
TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Prompt Format
TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```
### Supervised tasks
The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.
## Training Details
### Training Data
Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).
#### Training Hyperparameters
The following hyperparameters were used during training:
- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048
## Citation
To be completed.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
Aedelon/Reinforce-CartPole1
|
Aedelon
| 2024-01-11T19:08:37Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T14:41:38Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
andrewatef/MyBloggerV0.8
|
andrewatef
| 2024-01-11T19:00:56Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"region:us"
] | null | 2024-01-11T17:27:37Z |
---
library_name: peft
base_model: unsloth/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
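In the absence of author-provided instructions, a minimal sketch, assuming this repo hosts a PEFT (LoRA) adapter for the stated base model `unsloth/llama-2-7b`:
```python
# Minimal sketch (assumption: this repo is a PEFT adapter on unsloth/llama-2-7b).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-2-7b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-2-7b")
model = PeftModel.from_pretrained(base, "andrewatef/MyBloggerV0.8")
```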
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
armhebb/65995e622d50edfb3ead9268
|
armhebb
| 2024-01-11T18:59:45Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-11T16:55:01Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a photo in the style of The dataset has already been processed with this model.'
instance_prompt: a photo in the style of The dataset has already been processed with this model.
license: openrail++
---
# SDXL LoRA DreamBooth - armhebb/65995e622d50edfb3ead9268
<Gallery />
## Model description
### These are armhebb/65995e622d50edfb3ead9268 LoRA adaptation weights.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`/kohl_s_sonoma__checkpoints.safetensors` here 💾](/armhebb/65995e622d50edfb3ead9268/blob/main//kohl_s_sonoma__checkpoints.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:/kohl_s_sonoma__checkpoints:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`/kohl_s_sonoma__checkpoints_emb.safetensors` here 💾](/armhebb/65995e622d50edfb3ead9268/blob/main//kohl_s_sonoma__checkpoints_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `/kohl_s_sonoma__checkpoints_emb` to your prompt. For example, `a photo in the style of The dataset has already been processed with this model.`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('armhebb/65995e622d50edfb3ead9268', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='armhebb/65995e622d50edfb3ead9268', filename='/kohl_s_sonoma__checkpoints_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)  # token list inferred from the Trigger words section below
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a photo in the style of The dataset has already been processed with this model.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `Thedatasethasalreadybeenprocessedwiththismodel.` → use `<s0>` in your prompt
## Details
All [Files & versions](/armhebb/65995e622d50edfb3ead9268/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: None.
|
yaizaa/yaizaa
|
yaizaa
| 2024-01-11T18:52:19Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-01-11T18:52:17Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
drakrig/ppo-LunarLander-v2
|
drakrig
| 2024-01-11T18:51:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T20:25:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.09 +/- 20.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption — check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the .zip filename is assumed.
checkpoint = load_from_hub("drakrig/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jysssacc/mt0-base_adalora_lr0.0005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T18:51:27Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-base",
"base_model:adapter:bigscience/mt0-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T18:45:23Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-base
model-index:
- name: mt0-base_adalora_lr0.0005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-base_adalora_lr0.0005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0006
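The card does not include usage code; a minimal sketch of loading the AdaLoRA adapter with PEFT follows (it assumes the adapter weights sit at the repo root).
```python
# A minimal sketch, assuming an AdaLoRA adapter on the seq2seq base model.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-base")
model = PeftModel.from_pretrained(base, "jysssacc/mt0-base_adalora_lr0.0005_bs4_epoch5_wd0.01")
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-base")
```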
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5429 | 1.0 | 157 | 0.5466 |
| 0.1136 | 2.0 | 314 | 0.0242 |
| 0.0471 | 3.0 | 471 | 0.0056 |
| 0.0307 | 4.0 | 628 | 0.0025 |
| 0.0183 | 5.0 | 785 | 0.0006 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TheBloke/Nous-Capybara-limarpv3-34B-GPTQ
|
TheBloke
| 2024-01-11T18:44:27Z | 24 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Doctor-Shotgun/Nous-Capybara-limarpv3-34B",
"base_model:quantized:Doctor-Shotgun/Nous-Capybara-limarpv3-34B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-11T15:36:21Z |
---
base_model: Doctor-Shotgun/Nous-Capybara-limarpv3-34B
inference: false
license: other
model_creator: Doctor Shotgun
model_name: Nous Capybara Limarpv3 34B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nous Capybara Limarpv3 34B - GPTQ
- Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun)
- Original model: [Nous Capybara Limarpv3 34B](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Doctor Shotgun's Nous Capybara Limarpv3 34B](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GGUF)
* [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.60 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 19.25 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 21.21 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 15.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 35.34 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 16.90 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 36.11 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Nous-Capybara-limarpv3-34B-GPTQ`:
```shell
mkdir Nous-Capybara-limarpv3-34B-GPTQ
huggingface-cli download TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --local-dir Nous-Capybara-limarpv3-34B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Nous-Capybara-limarpv3-34B-GPTQ
huggingface-cli download TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Nous-Capybara-limarpv3-34B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Nous-Capybara-limarpv3-34B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --local-dir Nous-Capybara-limarpv3-34B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Capybara-limarpv3-34B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Nous-Capybara-limarpv3-34B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Doctor Shotgun's Nous Capybara Limarpv3 34B
No original model card was available.
|
stevhliu/vit-base-patch16-224-in21k-loha
|
stevhliu
| 2024-01-11T18:42:30Z | 15 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | 2024-01-11T16:48:00Z |
---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
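A minimal loading sketch, assuming a LoHa adapter for image classification (if the classifier head was not saved via `modules_to_save`, its weights will be randomly initialized):
```python
# A minimal sketch, assuming a LoHa image-classification adapter on the ViT base.
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
base = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224-in21k")
model = PeftModel.from_pretrained(base, "stevhliu/vit-base-patch16-224-in21k-loha")
```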
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
nm-testing/TinyLlama-1.1B-Chat-v1.0-pruned50-ds
|
nm-testing
| 2024-01-11T18:42:27Z | 2 | 0 |
transformers
|
[
"transformers",
"onnx",
"llama",
"text-generation",
"deepsparse",
"conversational",
"arxiv:2301.00774",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-11T07:43:36Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
inference: false
model_type: llama
prompt_template: |
<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
quantized_by: mwitiderrick
tags:
- deepsparse
---
## TinyLlama 1.1B Chat 1.0 - DeepSparse
This repo contains model files for [TinyLlama 1.1B Chat](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration
prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
model = TextGeneration(model_path="hf:nm-testing/TinyLlama-1.1B-Chat-v1.0-pruned50-ds")
print(model(formatted_prompt, max_new_tokens=200).generations[0].text)
"""
Sure, here's a recipe for making banana bread:
Ingredients:
- 2 ripe bananas
- 1 cup all-purpose flour
- 1 cup bread flour
- 1 cup all-purpose flour
- 1 cup bread flour
- 1 cup melted coconut oil
- 1/2 cup granulated sugar
- 1/2 cup water
- 1/2 cup melted coconut oil
- 1/2 cup granulated sugar
- 1/2 cup melted coconut oil
- 1/2 cup granulated sugar
- 1/2 cup melted coconut oil
- 1/2 cup granulated sugar
- 1/2 cup melted coconut oil
- 1/2 cup granulated sugar
- 1/2 cup
"""
```
## Prompt template
```
<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
```
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py TinyLlama/TinyLlama-1.1B-Chat-v1.0 open_platypus --precision float16 --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run this kv-cache injection to speed up the model at inference by caching the Key and Value states:
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide for performing one-shot quantization of large language models.
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
|
Feluda/Final_Fine_Tuned_Legal_Led
|
Feluda
| 2024-01-11T18:42:05Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"led",
"text2text-generation",
"summarization",
"base_model:nsi319/legal-led-base-16384",
"base_model:finetune:nsi319/legal-led-base-16384",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-12-27T19:00:24Z |
---
license: mit
library_name: transformers
base_model: nsi319/legal-led-base-16384
model-index:
- name: results
results: []
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [nsi319/legal-led-base-16384](https://huggingface.co/nsi319/legal-led-base-16384) on the joelniklaus/legal_case_document_summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7401
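A minimal usage sketch follows; the generation settings are illustrative, and it assumes the repo hosts full LED weights rather than only an adapter.
```python
# A minimal sketch, assuming full LED weights are hosted in the repo.
from transformers import pipeline

summarizer = pipeline("summarization", model="Feluda/Final_Fine_Tuned_Legal_Led")
long_case_text = "..."  # placeholder: paste the case document here
print(summarizer(long_case_text, max_length=256, truncation=True)[0]["summary_text"])
```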
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2 | 1.0 | 1924 | 2.8550 |
| 3.6193 | 2.0 | 3848 | 2.7593 |
| 2.7776 | 3.0 | 5772 | 2.7401 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
sekinat/ppo-CartPole-v1-wanb
|
sekinat
| 2024-01-11T18:41:50Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T18:41:42Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 252.80 +/- 99.53
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'default_name',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'CartPole-v1',
'total_timesteps': 50000,
'learning_rate': 0.00025,
'num_envs': 4,
'num_steps': 128,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'sekinat/ppo-CartPole-v1-wanb',
'batch_size': 512,
'minibatch_size': 128}
```
|
Bytes512/Waterbuck
|
Bytes512
| 2024-01-11T18:30:38Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:ChaiML/season_4_top_solution",
"base_model:merge:ChaiML/season_4_top_solution",
"base_model:Heralax/Augmental-13b-v1.50_B",
"base_model:merge:Heralax/Augmental-13b-v1.50_B",
"base_model:NeverSleep/Noromaid-13b-v0.3",
"base_model:merge:NeverSleep/Noromaid-13b-v0.3",
"base_model:TheBloke/Llama-2-13B-fp16",
"base_model:merge:TheBloke/Llama-2-13B-fp16",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T18:30:38Z |
---
base_model:
- Heralax/Augmental-13b-v1.50_B
- ChaiML/season_4_top_solution
- NeverSleep/Noromaid-13b-v0.3
- TheBloke/Llama-2-13B-fp16
- Fredithefish/RP_Base
tags:
- mergekit
- merge
---
# waterbuck
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.
### Models Merged
The following models were included in the merge:
* [Heralax/Augmental-13b-v1.50_B](https://huggingface.co/Heralax/Augmental-13b-v1.50_B)
* [ChaiML/season_4_top_solution](https://huggingface.co/ChaiML/season_4_top_solution)
* [NeverSleep/Noromaid-13b-v0.3](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3)
* [Fredithefish/RP_Base](https://huggingface.co/Fredithefish/RP_Base)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Heralax/Augmental-13b-v1.50_B
parameters:
density: 0.5
weight: 0.3
- model: Fredithefish/RP_Base
parameters:
density: 0.5
weight: 0.6
- model: NeverSleep/Noromaid-13b-v0.3
parameters:
density: 0.5
weight: 0.5
- model: ChaiML/season_4_top_solution
parameters:
density: 0.5
weight: 0.5
base_model: TheBloke/Llama-2-13B-fp16
merge_method: dare_ties
parameters:
normalize: 1.0
```
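To reproduce a merge like this, mergekit provides a CLI; a minimal invocation is sketched below (file and output paths are illustrative):
```shell
# Save the YAML above as config.yaml, then run (paths illustrative):
pip install mergekit
mergekit-yaml config.yaml ./waterbuck-merged --cuda
```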
|
sessex/mm-tabi-boot-product
|
sessex
| 2024-01-11T18:30:13Z | 25 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-04T20:05:05Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: a photo of purple <s0><s1>
output:
url: image-0.png
- text: a photo of brown <s0><s1>
output:
url: image-1.png
- text: a photo of tan <s0><s1>
output:
url: image-2.png
- text: a photo of metallic <s0><s1>
output:
url: image-3.png
- text: a photo of tan <s0><s1>
output:
url: image-4.png
- text: a photo of grey <s0><s1>
output:
url: image-5.png
- text: a photo of white <s0><s1>
output:
url: image-6.png
- text: a photo of brown <s0><s1>
output:
url: image-7.png
- text: a photo of black <s0><s1>
output:
url: image-8.png
- text: a photo of white <s0><s1>
output:
url: image-9.png
- text: a photo of brown <s0><s1>
output:
url: image-10.png
- text: a photo of black <s0><s1>
output:
url: image-11.png
- text: a photo of light blue <s0><s1>
output:
url: image-12.png
- text: a photo of grey <s0><s1>
output:
url: image-13.png
- text: a photo of tan <s0><s1>
output:
url: image-14.png
- text: a photo of black <s0><s1>
output:
url: image-15.png
- text: a photo of white <s0><s1>
output:
url: image-16.png
- text: a photo of tan <s0><s1>
output:
url: image-17.png
- text: a photo of grey <s0><s1>
output:
url: image-18.png
- text: a photo of black <s0><s1>
output:
url: image-19.png
- text: a photo of metallic <s0><s1>
output:
url: image-20.png
- text: a photo of black <s0><s1>
output:
url: image-21.png
- text: a photo of black <s0><s1>
output:
url: image-22.png
- text: a photo of black <s0><s1>
output:
url: image-23.png
- text: a photo of gold cowboy <s0><s1>
output:
url: image-24.png
- text: a photo of tall tan <s0><s1>
output:
url: image-25.png
- text: a photo of tall white <s0><s1>
output:
url: image-26.png
- text: a photo of tall burgundy <s0><s1>
output:
url: image-27.png
- text: a photo of tall black <s0><s1>
output:
url: image-28.png
- text: a photo of tall black <s0><s1>
output:
url: image-29.png
- text: a photo of tall white patterned <s0><s1>
output:
url: image-30.png
- text: a photo of burgundy <s0><s1>
output:
url: image-31.png
- text: a photo of black <s0><s1>
output:
url: image-32.png
- text: a photo of burgundy <s0><s1>
output:
url: image-33.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - sessex/mm-tabi-boot-product
<Gallery />
## Model description
### These are sessex/mm-tabi-boot-product LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`mm-tabi-boot-product.safetensors` here 💾](/sessex/mm-tabi-boot-product/blob/main/mm-tabi-boot-product.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:mm-tabi-boot-product:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`mm-tabi-boot-product_emb.safetensors` here 💾](/sessex/mm-tabi-boot-product/blob/main/mm-tabi-boot-product_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `mm-tabi-boot-product_emb` to your prompt. For example, `a photo of mm-tabi-boot-product_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sessex/mm-tabi-boot-product', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='sessex/mm-tabi-boot-product', filename='mm-tabi-boot-product_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/sessex/mm-tabi-boot-product/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Makucas/Mistral-7B-Instruct-v0.2_01
|
Makucas
| 2024-01-11T18:20:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T17:35:17Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Mistral-7B-Instruct-v0.2_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_01
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
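No usage code is provided; a minimal PEFT loading sketch follows (it assumes a causal-LM adapter on the instruct base model).
```python
# A minimal sketch, assuming a causal-LM PEFT adapter on Mistral-7B-Instruct-v0.2.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "Makucas/Mistral-7B-Instruct-v0.2_01")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```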
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
itzzdeep/youtube-thumbnails-sdxl-lora-v2
|
itzzdeep
| 2024-01-11T18:18:34Z | 4 | 3 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-11T12:53:22Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'instance_prompt'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: instance_prompt
license: openrail++
---
# SDXL LoRA DreamBooth - itzzdeep/youtube-thumbnails-sdxl-lora-v2
<Gallery />
## Model description
### These are itzzdeep/youtube-thumbnails-sdxl-lora-v2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`youtube-thumbnails-sdxl-lora-v2.safetensors` here 💾](/itzzdeep/youtube-thumbnails-sdxl-lora-v2/blob/main/youtube-thumbnails-sdxl-lora-v2.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:youtube-thumbnails-sdxl-lora-v2:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`youtube-thumbnails-sdxl-lora-v2_emb.safetensors` here 💾](/itzzdeep/youtube-thumbnails-sdxl-lora-v2/blob/main/youtube-thumbnails-sdxl-lora-v2_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `youtube-thumbnails-sdxl-lora-v2_emb` to your prompt. For example, `instance_prompt`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('itzzdeep/youtube-thumbnails-sdxl-lora-v2', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='itzzdeep/youtube-thumbnails-sdxl-lora-v2', filename='youtube-thumbnails-sdxl-lora-v2_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)  # token list inferred from the Trigger words section below
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('instance_prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/itzzdeep/youtube-thumbnails-sdxl-lora-v2/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
sridhar1111111111111111/MedQuad_Mistral_Med_Expert
|
sridhar1111111111111111
| 2024-01-11T18:17:36Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:stabilityai/stablelm-zephyr-3b",
"base_model:adapter:stabilityai/stablelm-zephyr-3b",
"region:us"
] | null | 2024-01-11T18:15:53Z |
---
library_name: peft
base_model: stabilityai/stablelm-zephyr-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
LoneStriker/Open_Gpt4_8x7B_v0.2-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-11T18:11:21Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"moe",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T18:02:58Z |
---
license: apache-2.0
tags:
- merge
- moe
---
Open_Gpt4_v0.2

This model is a TIES merge of Mixtral-8x7B-Instruct-v0.1 and bagel-8x7b-v0.2, with MixtralOrochi8x7B as the base model.
I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merge of many useful Mixtral models such as Mixtral Instruct,
Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like GPT-4, but more testing is required for that. I hope the community can help me determine if it's deserving of its name. 😊
This is the second iteration of this model, using better models in the merge to (hopefully) improve performance.
Base model:
- https://huggingface.co/smelborp/MixtralOrochi8x7B
Merged models:
- https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
- https://huggingface.co/jondurbin/bagel-8x7b-v0.2
Instruct template: Alpaca
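Since the card only names the template, here is a minimal sketch of an Alpaca-style prompt; the preamble wording is the widely used Alpaca default, not confirmed by this card:
```python
# Minimal Alpaca-style prompt builder (preamble wording is an assumption).
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(alpaca_prompt("Summarize the TIES merge method in two sentences."))
```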
Merge config:
```
models:
- model: Mixtral-8x7B-Instruct-v0.1
parameters:
density: .5
weight: .7
- model: bagel-8x7b-v0.2
parameters:
density: .5
weight: 1
merge_method: ties
base_model: MixtralOrochi8x7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
jysssacc/bloomz-560m_lora_lr0.005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T18:11:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-11T18:09:55Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: bloomz-560m_lora_lr0.005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-560m_lora_lr0.005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.7814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6311 | 1.0 | 157 | 3.5108 |
| 4.3604 | 2.0 | 314 | 5.1723 |
| 6.3332 | 3.0 | 471 | 7.7423 |
| 11.1799 | 4.0 | 628 | 7.2969 |
| 7.1242 | 5.0 | 785 | 7.7814 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
RaviNaik/Phi2-Osst
|
RaviNaik
| 2024-01-11T18:08:19Z | 4 | 1 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-11T17:51:37Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.4
- Tokenizers 0.15.0
|
lockylocks/q-FrozenLake-v1-4x4-noSlippery
|
lockylocks
| 2024-01-11T18:02:38Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T18:02:29Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub comes from the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="lockylocks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
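A follow-up sketch for rolling out the greedy policy. This assumes the loaded dict exposes the Q-table under a `qtable` key (as in the course utilities) and that the environment follows the gymnasium-style step API; adjust for older gym versions:
```python
import numpy as np

# Greedy rollout: always take the highest-value action for the current state.
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```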
|
ludoviciarraga/checkpoint-638
|
ludoviciarraga
| 2024-01-11T17:58:53Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2024-01-10T11:59:48Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.6.2
|
ludoviciarraga/checkpoint-478
|
ludoviciarraga
| 2024-01-11T17:58:42Z | 2 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2024-01-11T17:58:38Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.6.2
|
ludoviciarraga/checkpoint-319
|
ludoviciarraga
| 2024-01-11T17:58:37Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2024-01-11T17:58:28Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.6.2
|
cezeozue/pegasus-samsum
|
cezeozue
| 2024-01-11T17:52:22Z | 89 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-11T17:33:11Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0507 | 0.54 | 500 | 1.8682 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ryusangwon/6363_Llama-2-13b-hf
|
ryusangwon
| 2024-01-11T17:50:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2024-01-11T17:50:46Z |
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: 6363_Llama-2-13b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6363_Llama-2-13b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jysssacc/bloomz-560m_IA3_lr0.005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T17:47:02Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-11T17:45:50Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: bloomz-560m_IA3_lr0.005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-560m_IA3_lr0.005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0978 | 1.0 | 157 | 3.3881 |
| 3.4201 | 2.0 | 314 | 3.2714 |
| 3.1531 | 3.0 | 471 | 3.2386 |
| 2.7945 | 4.0 | 628 | 3.3271 |
| 2.557 | 5.0 | 785 | 3.4255 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
blindsolitaire/Alpha-Prompt
|
blindsolitaire
| 2024-01-11T17:41:18Z | 0 | 0 | null |
[
"tensorboard",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T10:19:06Z |
---
license: apache-2.0
---
This is a 13B LoRA trained on https://huggingface.co/Gryphe/MythoMax-L2-13b to generate SD prompts from short descriptions.
It generates verbose prompts, which can result in better images.
This LoRA was co-developed with https://huggingface.co/TwoAbove.
## Examples:


## Usage:
The simplest way to test this LoRA is to download it and use it with the [text-generation-webui](https://github.com/oobabooga/text-generation-webui); a programmatic sketch follows the examples below.
We also suggest using this character for better results. (Credits: altoiddealer)
```
{{user}}: isometric coffee shop
{{char}}: isometric model of a coffee shop with tables and stools, 3d fantasy cute house, intricate dimmed gold brown color tone, inspired by Miao Fu, miniature action figure, listing image, inside a glass box, bakery, inspired by Hideyuki Kikuchi
{{user}}: Cthulhu cereal advertisement
{{char}}: product photography of box of Cthulhu cereal with fish taste, two sad displeased crying kids eating bowl of cereal, box of Cthulhoos depicting an angry Cthulhu on the box, whimsical, commercial, magazine advertisement
{{user}}: anthropomorphic owl bard
{{char}}: full portrait of an anthropomorphic white owl, bard, human hands with gloves and a purple magic glow, big wings in the back, peacock motif on one wing, serious face, dnd, character design, digital art, intricate, detailed environment, fantasy
{{user}}: professional photo of coffee beans
{{char}}: professional macro photography of coffee beans
{{user}}: illustration of girl riding a turtle
{{char}}: a little adventurer riding a giant turtle in the jungle, children book illustration, playful, in the style of Francoise Seignobosc
{{user}}: cinematic photo of Jim Carrey as The Joker
{{char}}: Jim Carrey dressed as The Joker, cinematic light, detailed face, clown makeup, smile, cinematic style
{{user}}: pixar grumpy old man
{{char}}: enraged old man, face red with rage, fury. Humble house sidewalk background, animation pixar, cartoon, portrait, unreal engine, natural lighting, adorable
{{user}}: hazy photo of man praying
{{char}}: man in prayer, factured face close-up, haze filter, fisheye, vanishing point, vignetting, caustics, portrait, Masterpiece, soft lighting, high detail, amazing quality, saturated, light in contrast, trending on artstation, sharp focus, studio photo, intricate details, highly detailed
{{user}}: simple watercolor painting of a dog
{{char}}: simple watercolor painting of dachshund dog, simple white background
{{user}}: glistening lioness in tar
{{char}}: A slimy gooey black latex lioness, coated in glossy, tar-like goo that drips of its body, its body smooth and shiny as it drips, outside, nature, tar fluid, intricite and detailed digital art, painterly style, visible brushstrokes, textured brushstrokes, character concept art by Logan Preshaw and Johan Messely
{{user}}: bubbles containing miniature worlds
{{char}}: bubbles containing miniature worlds, extreme closeup, macro photography, moon, moonlight, stars, by ron arad, craola, mystical, otherworldly, magic, photorealism, volumetric lighting, beautiful, rich deep colors masterpiece, sharp focus, ultra detailed
{{user}}: light flowing through desert
{{char}}: a river of light flowing through the desert, fantasy, high contrast colors, epic detail, fantastic lighting, upscale
{{user}}: Deadpool reading newspaper
{{char}}: surreal, abstract, dark, fantasy, dichromatic, cinematic shot of Deadpool reading a newspaper in a classic diner, zoom lens, dynamic pose, action shot, cinematic lighting, highly detailed, intricate
{{user}}: octopus chef
{{char}}: a charming scene in an octopus' underwater kitchen, where the octopus, dressed in a chef's hat and apron, expertly uses its many arms to prepare a feast of seashell stew and kelp salad
{{user}}: watercolor portrait of "Hoggle"
{{char}}: watercolor portrait of "Hoggle from "Labyrinth" the movie, by Brian Froud and Jim Henson, mid-brown and green tones, loveable and a little strange and dark
{{user}}: happy dog
{{char}}: Golden Retriever with tongue sticking out, tail wagging
{{user}}: neon punk sunset
{{char}}: a wooden roundtable with wine and cheese in the foreground, a beautiful landscape in the background with grass hills and vineyards, cyberpunk aesthetic, neon punk style
{{user}}: modern real state listing
{{char}}: photo of a small cozy modern house in red woods on a mountain, solar panels, garage, driveway, great view, sunshine
{{user}}: skull surrounded by butterflies
{{char}}: a charming and hyper-realistic image of a skull, adorned with soft and cheerful flock of butterflies, hypnotic and uplifting atmosphere, masterfully rendered skull with intricate details, filled with fluttering and vibrant coloured butterflies, Rosanna Jones, Nick Brandt, highly detailed, natural outdoor environment, UHD resolution.
```
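As an alternative to the web UI, a minimal programmatic sketch. It assumes the repo hosts PEFT-format adapter weights (it may instead be a webui-format LoRA, in which case use text-generation-webui as above) and that the model accepts a bare short description as input:
```python
# Hedged sketch: attach this LoRA to the base model with PEFT.
# Assumptions: PEFT-format adapter weights; bare-description prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Gryphe/MythoMax-L2-13b", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "blindsolitaire/Alpha-Prompt")
tokenizer = AutoTokenizer.from_pretrained("Gryphe/MythoMax-L2-13b")

inputs = tokenizer("isometric coffee shop", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```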
## Credits:
https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts
https://huggingface.co/datasets/MadVoyager/stable_diffusion_instructional_dataset
https://huggingface.co/datasets/lusstta/stable_diffusion_instructional_dataset
https://huggingface.co/datasets/poloclub/diffusiondb
|
gagan3012/MetaModel_moe_yix2
|
gagan3012
| 2024-01-11T17:39:05Z | 0 | 0 | null |
[
"moe",
"mergekit",
"merge",
"chinese",
"arabic",
"english",
"multilingual",
"german",
"french",
"jondurbin/bagel-dpo-34b-v0.2",
"jondurbin/nontoxic-bagel-34b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T17:39:02Z |
---
license: apache-2.0
tags:
- moe
- mergekit
- merge
- chinese
- arabic
- english
- multilingual
- german
- french
- jondurbin/bagel-dpo-34b-v0.2
- jondurbin/nontoxic-bagel-34b-v0.2
---
# MetaModel_moe_yix2
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
* [jondurbin/nontoxic-bagel-34b-v0.2](https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2)
## 🧩 Configuration
```yaml
base_model: jondurbin/bagel-dpo-34b-v0.2
dtype: bfloat16
experts:
- positive_prompts:
- chat
- assistant
- tell me
- explain
source_model: jondurbin/bagel-dpo-34b-v0.2
- positive_prompts:
- chat
- assistant
- tell me
- explain
source_model: jondurbin/nontoxic-bagel-34b-v0.2
gate_mode: hidden
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gagan3012/MetaModel_moe_yix2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
lupex/ail
|
lupex
| 2024-01-11T17:34:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:h94/IP-Adapter-FaceID",
"base_model:adapter:h94/IP-Adapter-FaceID",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-01-11T17:31:03Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/logo.png
base_model: h94/IP-Adapter-FaceID
instance_prompt: ail
license: apache-2.0
---
# ail
<Gallery />
## Model description

## Trigger words
You should use `ail` to trigger the image generation.
## Download model
[Download](/lupex/ail/tree/main) them in the Files & versions tab.
|
smutuvi/whisper-small-sw-common-voice-ndizi-158-200epochs
|
smutuvi
| 2024-01-11T17:32:18Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:smutuvi/whisper-small-sw-common-voice",
"base_model:adapter:smutuvi/whisper-small-sw-common-voice",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T17:32:17Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: smutuvi/whisper-small-sw-common-voice
model-index:
- name: whisper-small-sw-common-voice-ndizi-158-200epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sw-common-voice-ndizi-158-200epochs
This model is a fine-tuned version of [smutuvi/whisper-small-sw-common-voice](https://huggingface.co/smutuvi/whisper-small-sw-common-voice) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6105
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 1.9360 |
| 1.7703 | 2.0 | 36 | 1.9238 |
| 1.7272 | 3.0 | 54 | 1.9020 |
| 1.7272 | 4.0 | 72 | 1.8734 |
| 1.6774 | 5.0 | 90 | 1.8500 |
| 1.6158 | 6.0 | 108 | 1.8275 |
| 1.593 | 7.0 | 126 | 1.8072 |
| 1.593 | 8.0 | 144 | 1.7844 |
| 1.5635 | 9.0 | 162 | 1.7650 |
| 1.4675 | 10.0 | 180 | 1.7491 |
| 1.4675 | 11.0 | 198 | 1.7345 |
| 1.5034 | 12.0 | 216 | 1.7205 |
| 1.5317 | 13.0 | 234 | 1.7083 |
| 1.4329 | 14.0 | 252 | 1.6991 |
| 1.4329 | 15.0 | 270 | 1.6891 |
| 1.38 | 16.0 | 288 | 1.6826 |
| 1.3865 | 17.0 | 306 | 1.6771 |
| 1.3865 | 18.0 | 324 | 1.6673 |
| 1.4132 | 19.0 | 342 | 1.6620 |
| 1.3528 | 20.0 | 360 | 1.6544 |
| 1.3223 | 21.0 | 378 | 1.6499 |
| 1.3223 | 22.0 | 396 | 1.6439 |
| 1.3042 | 23.0 | 414 | 1.6376 |
| 1.3851 | 24.0 | 432 | 1.6328 |
| 1.2761 | 25.0 | 450 | 1.6260 |
| 1.2761 | 26.0 | 468 | 1.6226 |
| 1.2968 | 27.0 | 486 | 1.6194 |
| 1.2563 | 28.0 | 504 | 1.6150 |
| 1.2563 | 29.0 | 522 | 1.6130 |
| 1.3355 | 30.0 | 540 | 1.6058 |
| 1.1684 | 31.0 | 558 | 1.6061 |
| 1.2346 | 32.0 | 576 | 1.6011 |
| 1.2346 | 33.0 | 594 | 1.5993 |
| 1.2771 | 34.0 | 612 | 1.5951 |
| 1.1984 | 35.0 | 630 | 1.5938 |
| 1.1984 | 36.0 | 648 | 1.5895 |
| 1.2108 | 37.0 | 666 | 1.5868 |
| 1.2113 | 38.0 | 684 | 1.5838 |
| 1.1797 | 39.0 | 702 | 1.5824 |
| 1.1797 | 40.0 | 720 | 1.5759 |
| 1.1079 | 41.0 | 738 | 1.5748 |
| 1.239 | 42.0 | 756 | 1.5756 |
| 1.239 | 43.0 | 774 | 1.5734 |
| 1.1676 | 44.0 | 792 | 1.5719 |
| 1.1334 | 45.0 | 810 | 1.5702 |
| 1.103 | 46.0 | 828 | 1.5634 |
| 1.103 | 47.0 | 846 | 1.5628 |
| 1.1201 | 48.0 | 864 | 1.5641 |
| 1.1103 | 49.0 | 882 | 1.5609 |
| 1.1624 | 50.0 | 900 | 1.5596 |
| 1.1624 | 51.0 | 918 | 1.5615 |
| 1.1292 | 52.0 | 936 | 1.5585 |
| 1.0579 | 53.0 | 954 | 1.5563 |
| 1.0579 | 54.0 | 972 | 1.5585 |
| 1.1412 | 55.0 | 990 | 1.5578 |
| 1.0949 | 56.0 | 1008 | 1.5545 |
| 1.0209 | 57.0 | 1026 | 1.5543 |
| 1.0209 | 58.0 | 1044 | 1.5567 |
| 1.0801 | 59.0 | 1062 | 1.5539 |
| 1.1108 | 60.0 | 1080 | 1.5555 |
| 1.1108 | 61.0 | 1098 | 1.5540 |
| 1.0092 | 62.0 | 1116 | 1.5528 |
| 1.0998 | 63.0 | 1134 | 1.5565 |
| 1.065 | 64.0 | 1152 | 1.5534 |
| 1.065 | 65.0 | 1170 | 1.5555 |
| 0.9916 | 66.0 | 1188 | 1.5548 |
| 1.0833 | 67.0 | 1206 | 1.5531 |
| 1.0833 | 68.0 | 1224 | 1.5530 |
| 0.9678 | 69.0 | 1242 | 1.5562 |
| 0.994 | 70.0 | 1260 | 1.5542 |
| 1.0578 | 71.0 | 1278 | 1.5545 |
| 1.0578 | 72.0 | 1296 | 1.5522 |
| 1.0325 | 73.0 | 1314 | 1.5535 |
| 0.9367 | 74.0 | 1332 | 1.5554 |
| 0.9867 | 75.0 | 1350 | 1.5534 |
| 0.9867 | 76.0 | 1368 | 1.5580 |
| 0.9753 | 77.0 | 1386 | 1.5568 |
| 0.9434 | 78.0 | 1404 | 1.5543 |
| 0.9434 | 79.0 | 1422 | 1.5538 |
| 1.0323 | 80.0 | 1440 | 1.5552 |
| 0.895 | 81.0 | 1458 | 1.5559 |
| 1.0086 | 82.0 | 1476 | 1.5570 |
| 1.0086 | 83.0 | 1494 | 1.5557 |
| 0.9415 | 84.0 | 1512 | 1.5578 |
| 0.9351 | 85.0 | 1530 | 1.5552 |
| 0.9351 | 86.0 | 1548 | 1.5559 |
| 0.9734 | 87.0 | 1566 | 1.5591 |
| 0.9218 | 88.0 | 1584 | 1.5602 |
| 0.9361 | 89.0 | 1602 | 1.5611 |
| 0.9361 | 90.0 | 1620 | 1.5608 |
| 0.9255 | 91.0 | 1638 | 1.5651 |
| 0.8865 | 92.0 | 1656 | 1.5620 |
| 0.8865 | 93.0 | 1674 | 1.5626 |
| 0.9224 | 94.0 | 1692 | 1.5625 |
| 0.9447 | 95.0 | 1710 | 1.5614 |
| 0.9318 | 96.0 | 1728 | 1.5591 |
| 0.9318 | 97.0 | 1746 | 1.5643 |
| 0.9399 | 98.0 | 1764 | 1.5648 |
| 0.8648 | 99.0 | 1782 | 1.5636 |
| 0.9009 | 100.0 | 1800 | 1.5696 |
| 0.9009 | 101.0 | 1818 | 1.5682 |
| 0.9023 | 102.0 | 1836 | 1.5678 |
| 0.8346 | 103.0 | 1854 | 1.5738 |
| 0.8346 | 104.0 | 1872 | 1.5652 |
| 0.8902 | 105.0 | 1890 | 1.5680 |
| 0.9123 | 106.0 | 1908 | 1.5709 |
| 0.8684 | 107.0 | 1926 | 1.5722 |
| 0.8684 | 108.0 | 1944 | 1.5717 |
| 0.8147 | 109.0 | 1962 | 1.5727 |
| 0.905 | 110.0 | 1980 | 1.5713 |
| 0.905 | 111.0 | 1998 | 1.5752 |
| 0.9051 | 112.0 | 2016 | 1.5753 |
| 0.7903 | 113.0 | 2034 | 1.5753 |
| 0.8993 | 114.0 | 2052 | 1.5771 |
| 0.8993 | 115.0 | 2070 | 1.5771 |
| 0.8289 | 116.0 | 2088 | 1.5804 |
| 0.8691 | 117.0 | 2106 | 1.5787 |
| 0.8691 | 118.0 | 2124 | 1.5773 |
| 0.8526 | 119.0 | 2142 | 1.5785 |
| 0.8238 | 120.0 | 2160 | 1.5804 |
| 0.8299 | 121.0 | 2178 | 1.5828 |
| 0.8299 | 122.0 | 2196 | 1.5851 |
| 0.9001 | 123.0 | 2214 | 1.5810 |
| 0.7701 | 124.0 | 2232 | 1.5843 |
| 0.8467 | 125.0 | 2250 | 1.5810 |
| 0.8467 | 126.0 | 2268 | 1.5834 |
| 0.7867 | 127.0 | 2286 | 1.5882 |
| 0.8984 | 128.0 | 2304 | 1.5898 |
| 0.8984 | 129.0 | 2322 | 1.5899 |
| 0.7879 | 130.0 | 2340 | 1.5878 |
| 0.833 | 131.0 | 2358 | 1.5934 |
| 0.8257 | 132.0 | 2376 | 1.5930 |
| 0.8257 | 133.0 | 2394 | 1.5927 |
| 0.8139 | 134.0 | 2412 | 1.5919 |
| 0.8014 | 135.0 | 2430 | 1.5940 |
| 0.8014 | 136.0 | 2448 | 1.5953 |
| 0.8397 | 137.0 | 2466 | 1.5967 |
| 0.8293 | 138.0 | 2484 | 1.5941 |
| 0.7821 | 139.0 | 2502 | 1.6005 |
| 0.7821 | 140.0 | 2520 | 1.6016 |
| 0.7746 | 141.0 | 2538 | 1.5951 |
| 0.8059 | 142.0 | 2556 | 1.5967 |
| 0.8059 | 143.0 | 2574 | 1.5974 |
| 0.7862 | 144.0 | 2592 | 1.6043 |
| 0.8253 | 145.0 | 2610 | 1.6015 |
| 0.7718 | 146.0 | 2628 | 1.6018 |
| 0.7718 | 147.0 | 2646 | 1.6005 |
| 0.7898 | 148.0 | 2664 | 1.6014 |
| 0.813 | 149.0 | 2682 | 1.5998 |
| 0.792 | 150.0 | 2700 | 1.6023 |
| 0.792 | 151.0 | 2718 | 1.6019 |
| 0.8097 | 152.0 | 2736 | 1.6022 |
| 0.7703 | 153.0 | 2754 | 1.6063 |
| 0.7703 | 154.0 | 2772 | 1.6023 |
| 0.7731 | 155.0 | 2790 | 1.6082 |
| 0.789 | 156.0 | 2808 | 1.6057 |
| 0.7733 | 157.0 | 2826 | 1.6040 |
| 0.7733 | 158.0 | 2844 | 1.6003 |
| 0.7945 | 159.0 | 2862 | 1.6086 |
| 0.7177 | 160.0 | 2880 | 1.6056 |
| 0.7177 | 161.0 | 2898 | 1.6068 |
| 0.7887 | 162.0 | 2916 | 1.6069 |
| 0.7331 | 163.0 | 2934 | 1.6074 |
| 0.8337 | 164.0 | 2952 | 1.6048 |
| 0.8337 | 165.0 | 2970 | 1.6052 |
| 0.7903 | 166.0 | 2988 | 1.6082 |
| 0.7811 | 167.0 | 3006 | 1.6050 |
| 0.7811 | 168.0 | 3024 | 1.6128 |
| 0.7264 | 169.0 | 3042 | 1.6079 |
| 0.769 | 170.0 | 3060 | 1.6052 |
| 0.7689 | 171.0 | 3078 | 1.6091 |
| 0.7689 | 172.0 | 3096 | 1.6099 |
| 0.7977 | 173.0 | 3114 | 1.6084 |
| 0.7253 | 174.0 | 3132 | 1.6123 |
| 0.7912 | 175.0 | 3150 | 1.6114 |
| 0.7912 | 176.0 | 3168 | 1.6092 |
| 0.748 | 177.0 | 3186 | 1.6146 |
| 0.7707 | 178.0 | 3204 | 1.6138 |
| 0.7707 | 179.0 | 3222 | 1.6135 |
| 0.7506 | 180.0 | 3240 | 1.6119 |
| 0.7549 | 181.0 | 3258 | 1.6137 |
| 0.7793 | 182.0 | 3276 | 1.6084 |
| 0.7793 | 183.0 | 3294 | 1.6095 |
| 0.7709 | 184.0 | 3312 | 1.6115 |
| 0.7416 | 185.0 | 3330 | 1.6108 |
| 0.7416 | 186.0 | 3348 | 1.6140 |
| 0.7727 | 187.0 | 3366 | 1.6131 |
| 0.7674 | 188.0 | 3384 | 1.6146 |
| 0.731 | 189.0 | 3402 | 1.6146 |
| 0.731 | 190.0 | 3420 | 1.6133 |
| 0.8103 | 191.0 | 3438 | 1.6113 |
| 0.7121 | 192.0 | 3456 | 1.6127 |
| 0.7121 | 193.0 | 3474 | 1.6130 |
| 0.738 | 194.0 | 3492 | 1.6096 |
| 0.7967 | 195.0 | 3510 | 1.6137 |
| 0.7543 | 196.0 | 3528 | 1.6149 |
| 0.7543 | 197.0 | 3546 | 1.6149 |
| 0.7389 | 198.0 | 3564 | 1.6128 |
| 0.7658 | 199.0 | 3582 | 1.6142 |
| 0.7161 | 200.0 | 3600 | 1.6105 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
vicgalle/phi-2-contextual
|
vicgalle
| 2024-01-11T17:26:01Z | 4 | 5 |
peft
|
[
"peft",
"safetensors",
"dpo",
"rag",
"dataset:jondurbin/contextual-dpo-v0.1",
"base_model:cognitivecomputations/dolphin-2_6-phi-2",
"base_model:adapter:cognitivecomputations/dolphin-2_6-phi-2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T17:02:00Z |
---
library_name: peft
base_model: cognitivecomputations/dolphin-2_6-phi-2
license: apache-2.0
datasets:
- jondurbin/contextual-dpo-v0.1
tags:
- dpo
- rag
---
# Model Card for Phi-2-Contextual
A lightweight (2.8B) model with enhanced RAG capabilities and a lower risk of hallucination.
This is a DPO fine-tune of the Phi-2 architecture (in particular, dolphin-2_6-phi-2) over the dataset https://huggingface.co/datasets/jondurbin/contextual-dpo-v0.1.

## Usage
Load the model as
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
"vicgalle/phi-2-contextual",
torch_dtype="auto",
load_in_4bit=True,
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
"cognitivecomputations/dolphin-2_6-phi-2",
trust_remote_code=True
)
```
and use the following prompt template.
## Prompt format
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
with the following definitions:
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
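Putting the pieces together, a minimal sketch that builds this prompt and generates with the `model` and `tokenizer` loaded above; the document text and question are placeholders:
```python
# Hedged sketch: assemble a single-block contextual prompt and generate.
document_text = "Rottnest Island ..."  # placeholder: paste your source text here
question = "In which years was the island used as a jail?"

prompt = (
    "BEGININPUT\n"
    "BEGINCONTEXT\n"
    "date: 2021-01-01\n"
    "url: https://en.wikipedia.org/wiki/Rottnest_Island\n"
    "ENDCONTEXT\n"
    f"{document_text}\n"
    "ENDINPUT\n"
    "BEGININSTRUCTION\n"
    f"{question}\n"
    "ENDINSTRUCTION\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```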
## Examples
### Reference-based Q&A
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://en.wikipedia.org/wiki/Rottnest_Island
ENDCONTEXT
Rottnest Island (Nyungar: Wadjemup), often colloquially referred to as "Rotto", is a 19-square-kilometre (7.3 sq mi) island off the coast of Western Australia, located 18 kilometres (11 mi) west of Fremantle. A sandy, low-lying island formed on a base of aeolianite limestone, Rottnest is an A-class reserve, the highest level of protection afforded to public land.
Together with Garden Island, Rottnest Island is a remnant of Pleistocene dune ridges.[7] Along with several other islands, Rottnest became separated from the mainland around 7,000 years ago, when sea levels rose; the traditional Noongar name for the island is Wadjemup, which means "place across the water where the spirits are".[8] Human artefacts have been found on the island dating back at least 30,000 years, but visitation and habitation of the island by the Noongar people appears to have ceased following its separation from the mainland.
The island was first documented by Willem de Vlamingh in 1696, who called it 't Eylandt 't Rottenest ("Rats' Nest Island") after the quokka population. Following establishment of the Swan River Colony (now Perth) in 1829, the island was initially used by British settlers for agricultural purposes, and a permanent settlement was built in Thomson Bay. From 1838 to 1931, Rottnest Island was also used as a prison and forced labour camp for over 3,600 Aboriginal people, who were subjected to extremely harsh conditions on the island.[9][10] Other historical uses of the island include as a military site, and for internment camps housing enemy aliens. Many of the island's buildings date from the colonial period, often made from locally quarried limestone, and are now used as accommodation for holidays.
ENDINPUT
BEGININSTRUCTION
In which years was the island used as a jail?
ENDINSTRUCTION
```
produces
```
The island was used as a jail from 1838 to 1931.
```
### Structured output generation
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
The rock parrot (Neophema petrophila) is a species of grass parrot native to Australia. Described by John Gould in 1841, it is a small parrot 22 to 24 cm (8+3⁄4 to 9+1⁄2 in) long and weighing 50 to 60 g (1+3⁄4 to 2 oz) with predominantly olive-brown upperparts and more yellowish underparts. Its head is olive with light blue forecheeks and lores, and a dark blue frontal band line across the crown with lighter blue above and below. The sexes are similar in appearance, although the female tends to have a duller frontal band and less blue on the face. The preferred habitats are rocky islands and coastal dune areas from Lake Alexandrina in southeastern South Australia westwards across coastal South and Western Australia to Shark Bay.
ENDINPUT
BEGININSTRUCTION
Generate a list with all the mentioned locations
ENDINSTRUCTION
```
produces
```
The mentioned locations in the provided text are:
1. Lake Alexandrina in southeastern South Australia
2. Coastal dune areas extending from Lake Alexandrina to Shark Bay in Western Australia.
```
### Multiple contexts
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://wikipedia.com/Lego
ENDCONTEXT
Since 1963, Lego pieces have been manufactured from acrylonitrile butadiene styrene (ABS).[19][39] As of September 2008, Lego engineers use the NX CAD/CAM/CAE PLM software suite to model the elements. The software allows the parts to be optimised by way of mould flow and stress analysis. Prototype moulds are sometimes built before the design is committed to mass production. The ABS plastic is heated to 232 °C (450 °F) until it reaches a dough-like consistency. It is then injected into the moulds using forces of between 25 and 150 tonnes and takes approximately 15 seconds to cool. The moulds are permitted a tolerance of up to twenty micrometres to ensure the bricks remain connected.[42] Human inspectors check the output of the moulds to eliminate significant variations in colour or thickness. According to the Lego Group, about eighteen bricks out of every million fail to meet the standard required.[46]
Lego factories recycle all but about 1 percent of their plastic waste from the manufacturing process. If the plastic cannot be re-used in Lego bricks, it is processed and sold on to industries that can make use of it.[47][48] Lego, in 2018, set a self-imposed 2030 deadline to find a more eco-friendly alternative to the ABS plastic.[49]
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2021-01-02
url: https://wikipedia.com/Rottnest
ENDCONTEXT
Rottnest Island (Nyungar: Wadjemup), often colloquially referred to as "Rotto", is a 19-square-kilometre (7.3 sq mi) island off the coast of Western Australia, located 18 kilometres (11 mi) west of Fremantle. A sandy, low-lying island formed on a base of aeolianite limestone, Rottnest is an A-class reserve, the highest level of protection afforded to public land.
Together with Garden Island, Rottnest Island is a remnant of Pleistocene dune ridges.[7] Along with several other islands, Rottnest became separated from the mainland around 7,000 years ago, when sea levels rose; the traditional Noongar name for the island is Wadjemup, which means "place across the water where the spirits are".[8] Human artefacts have been found on the island dating back at least 30,000 years, but visitation and habitation of the island by the Noongar people appears to have ceased following its separation from the mainland.
The island was first documented by Willem de Vlamingh in 1696, who called it 't Eylandt 't Rottenest ("Rats' Nest Island") after the quokka population. Following establishment of the Swan River Colony (now Perth) in 1829, the island was initially used by British settlers for agricultural purposes, and a permanent settlement was built in Thomson Bay. From 1838 to 1931, Rottnest Island was also used as a prison and forced labour camp for over 3,600 Aboriginal people, who were subjected to extremely harsh conditions on the island.[9][10] Other historical uses of the island include as a military site, and for internment camps housing enemy aliens. Many of the island's buildings date from the colonial period, often made from locally quarried limestone, and are now used as accommodation for holidays.
ENDINPUT
BEGININSTRUCTION
What is threshold to ensure the pieces match?
ENDINSTRUCTION
```
produces
```
The threshold to ensure the pieces match refers to the tolerance of up to twenty micrometres in the Lego brick manufacturing process. According to the provided context, this tolerance is set by the Lego Group and it helps to ensure that the bricks remain connected.
Reference(s):
date: 2021-01-01
url: https://wikipedia.com/Lego
```
### Framework versions
- PEFT 0.7.1
|