Dataset columns: modelId (string, length 5 to 139), author (string, length 2 to 42), last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-28 18:27:53), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 525 classes), tags (list, length 1 to 4.05k), pipeline_tag (string, 55 classes), createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-28 18:27:52), card (string, length 11 to 1.01M).

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
| Arya6967/SAMMed-2d | Arya6967 | 2023-11-23T01:12:02Z | 0 | 0 | null | ["arxiv:2308.16184", "region:us"] | null | 2023-11-23T01:06:23Z |
# SAM-Med2D \[[Paper](https://arxiv.org/abs/2308.16184)]
[Open in OpenXLab](https://openxlab.org.cn/apps/detail/litianbin/SAM-Med2D)
<a href="https://arxiv.org/abs/2308.16184"> <img src="https://img.shields.io/badge/cs.CV-2308.16184-b31b1b?logo=arxiv&logoColor=red"> </a>
<a href="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/SAM-Med2D_wechat_group.jpeg"> <img src="https://img.shields.io/badge/WeChat-Group-green?logo=wechat"> </a>
<a target="_blank" href="https://colab.research.google.com/github/openmedlab/SAM-Med2D/blob/main/predictor_example.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
<!-- ## Description -->
## 🌤️ Highlights
- 🏆 Collected and curated the largest medical image segmentation dataset (4.6M images and 19.7M masks) to date for training models.
- 🏆 The most comprehensive fine-tuning based on Segment Anything Model (SAM).
- 🏆 Comprehensive evaluation of SAM-Med2D on large-scale datasets.
## 🔥 Updates
- (2023.09.02) Test code release
- (2023.08.31) Pre-trained model release
- (2023.08.31) Paper release
- (2023.08.26) Online Demo release
## 👉 Dataset
SAM-Med2D is trained and tested on a dataset that includes **4.6M images** and **19.7M masks**. This dataset covers 10 medical data modalities, 4 anatomical structures + lesions, and 31 major human organs. To our knowledge, this is currently the largest and most diverse medical image segmentation dataset in terms of quantity and coverage of categories.
<p align="center"><img width="800" alt="image" src="https://github.com/openmedlab/SAM-Med2D/blob/main/assets/dataset.png"></p>
## 👉 Framework
The pipeline of SAM-Med2D. We freeze the image encoder and incorporate learnable adapter layers in each Transformer block to acquire domain-specific knowledge in the medical field. We fine-tune the prompt encoder using point, Bbox, and mask information, while updating the parameters of the mask decoder through interactive training.
<p align="center"><img width="800" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/framwork.png"></p>
## 👉 Results
<table>
<caption align="center">Quantitative comparison of different methods on the test set: </caption>
<thead>
<tr>
<th>Model</th>
<th>Resolution</th>
<th>Bbox (%)</th>
<th>1 pt (%)</th>
<th>3 pts (%)</th>
<th>5 pts (%)</th>
<th>FPS</th>
<th>Checkpoint</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">SAM</td>
<td align="center">$256\times256$</td>
<td align="center">61.63</td>
<td align="center">18.94</td>
<td align="center">28.28</td>
<td align="center">37.47</td>
<td align="center">51</td>
<td align="center"><a href="https://drive.google.com/file/d/1_U26MIJhWnWVwmI5JkGg2cd2J6MvkqU-/view?usp=drive_link">Offical</a></td>
</tr>
<tr>
<td align="center">SAM</td>
<td align="center">$1024\times1024$</td>
<td align="center">74.49</td>
<td align="center">36.88</td>
<td align="center">42.00</td>
<td align="center">47.57</td>
<td align="center">8</td>
<td align="center"><a href="https://drive.google.com/file/d/1_U26MIJhWnWVwmI5JkGg2cd2J6MvkqU-/view?usp=drive_link">Offical</a></td>
</tr>
<tr>
<td align="center">FT-SAM</td>
<td align="center">$256\times256$</td>
<td align="center">73.56</td>
<td align="center">60.11</td>
<td align="center">70.95</td>
<td align="center">75.51</td>
<td align="center">51</td>
<td align="center"><a href="https://drive.google.com/file/d/1J4qQt9MZZYdv1eoxMTJ4FL8Fz65iUFM8/view?usp=drive_link">FT-SAM</a></td>
</tr>
<tr>
<td align="center">SAM-Med2D</td>
<td align="center">$256\times256$</td>
<td align="center">79.30</td>
<td align="center">70.01</td>
<td align="center">76.35</td>
<td align="center">78.68</td>
<td align="center">35</td>
<td align="center"><a href="https://drive.google.com/file/d/1ARiB5RkSsWmAB_8mqWnwDF8ZKTtFwsjl/view?usp=drive_link">SAM-Med2D</a></td>
</tr>
</tbody>
</table>
<table>
<caption align="center">Generalization validation on 9 MICCAI2023 datasets, where "*" denotes that we drop adapter layer of SAM-Med2D in test phase: </caption>
<thead>
<tr>
<th rowspan="2">Datasets</th>
<th colspan="3">Bbox prompt (%)</th>
<th colspan="3">1 point prompt (%)</th>
</tr>
<tr>
<th>SAM</th>
<th>SAM-Med2D</th>
<th>SAM-Med2D*</th>
<th>SAM</th>
<th>SAM-Med2D</th>
<th>SAM-Med2D*</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><a href="https://www.synapse.org/#!Synapse:syn51236108/wiki/621615">CrossMoDA23</a></td>
<td align="center">78.98</td>
<td align="center">70.51</td>
<td align="center">84.62</td>
<td align="center">18.49</td>
<td align="center">46.08</td>
<td align="center">73.98</td>
</tr>
<tr>
<td align="center"><a href="https://kits-challenge.org/kits23/">KiTS23</a></td>
<td align="center">84.80</td>
<td align="center">76.32</td>
<td align="center">87.93</td>
<td align="center">38.93</td>
<td align="center">48.81</td>
<td align="center">79.87</td>
</tr>
<tr>
<td align="center"><a href="https://codalab.lisn.upsaclay.fr/competitions/12239#learn_the_details">FLARE23</a></td>
<td align="center">86.11</td>
<td align="center">83.51</td>
<td align="center">90.95</td>
<td align="center">51.05</td>
<td align="center">62.86</td>
<td align="center">85.10</td>
</tr>
<tr>
<td align="center"><a href="https://atlas-challenge.u-bourgogne.fr/">ATLAS2023</a></td>
<td align="center">82.98</td>
<td align="center">73.70</td>
<td align="center">86.56</td>
<td align="center">46.89</td>
<td align="center">34.72</td>
<td align="center">70.42</td>
</tr>
<tr>
<td align="center"><a href="https://multicenteraorta.grand-challenge.org/">SEG2023</a></td>
<td align="center">75.98</td>
<td align="center">68.02</td>
<td align="center">84.31</td>
<td align="center">11.75</td>
<td align="center">48.05</td>
<td align="center">69.85</td>
</tr>
<tr>
<td align="center"><a href="https://lnq2023.grand-challenge.org/lnq2023/">LNQ2023</a></td>
<td align="center">72.31</td>
<td align="center">63.84</td>
<td align="center">81.33</td>
<td align="center">3.81</td>
<td align="center">44.81</td>
<td align="center">59.84</td>
</tr>
<tr>
<td align="center"><a href="https://codalab.lisn.upsaclay.fr/competitions/9804">CAS2023</a></td>
<td align="center">52.34</td>
<td align="center">46.11</td>
<td align="center">60.38</td>
<td align="center">0.45</td>
<td align="center">28.79</td>
<td align="center">15.19</td>
</tr>
<tr>
<td align="center"><a href="https://tdsc-abus2023.grand-challenge.org/Dataset/">TDSC-ABUS2023</a></td>
<td align="center">71.66</td>
<td align="center">64.65</td>
<td align="center">76.65</td>
<td align="center">12.11</td>
<td align="center">35.99</td>
<td align="center">61.84</td>
</tr>
<tr>
<td align="center"><a href="https://toothfairy.grand-challenge.org/toothfairy/">ToothFairy2023</a></td>
<td align="center">65.86</td>
<td align="center">57.45</td>
<td align="center">75.29</td>
<td align="center">1.01</td>
<td align="center">32.12</td>
<td align="center">47.32</td>
</tr>
<tr>
<td align="center">Weighted sum</td>
<td align="center">85.35</td>
<td align="center">81.93</td>
<td align="center">90.12</td>
<td align="center">48.08</td>
<td align="center">60.31</td>
<td align="center">83.41</td>
</tr>
</tbody>
</table>
## 👉 Visualization
<p align="center"><img width="800" alt="image" src="https://github.com/openmedlab/SAM-Med2D/blob/main/assets/visualization.png"></p>
## 👉 Test
Prepare your own dataset and refer to the samples in `SAM-Med2D/data_demo`, replacing them according to your specific scenario. You need to generate the "label2image_test.json" file before running "test.py"; an illustrative full invocation is sketched after the parameter list below.
```bash
cd ./SAM-Med2D
python test.py
```
- work_dir: Specifies the working directory for the testing process. Default value is "workdir".
- batch_size: Batch size for testing. Default value is 1.
- image_size: Input image size. Default value is 256.
- boxes_prompt: Use the Bbox prompt to get segmentation results.
- point_num: Specifies the number of points. Default value is 1.
- iter_point: Specifies the number of iterations for point prompts.
- sam_checkpoint: Path to the SAM or SAM-Med2D checkpoint to load.
- encoder_adapter: Set to True if using SAM-Med2D's pretrained weights.
- save_pred: Whether to save the prediction results.
- prompt_path: Path to a fixed prompt file, if any. If None, prompts are generated automatically during prediction.
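The sketch below combines the options above into one hypothetical invocation; the exact flag spellings, types, and checkpoint path are assumptions, so verify them with `python test.py --help` in the repository.
```python
import subprocess
import sys

# Hypothetical test.py invocation; flag names mirror the options listed above,
# and the checkpoint path is an assumed placeholder.
subprocess.run(
    [
        sys.executable, "test.py",
        "--work_dir", "workdir",
        "--batch_size", "1",
        "--image_size", "256",
        "--point_num", "1",
        "--sam_checkpoint", "path/to/sam-med2d_checkpoint.pth",
        "--encoder_adapter", "True",   # using SAM-Med2D pretrained weights
        "--save_pred", "True",         # save the predicted masks
    ],
    check=True,
)
```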
## 🚀 Try SAM-Med2D
- 🏆 **Gradio Online:** Online Demo can be found on [OpenXLab](https://openxlab.org.cn/apps/detail/litianbin/SAM-Med2D).
- 🏆 **Notebook Demo:** You can use [predictor_example.ipynb](https://github.com/openmedlab/SAM-Med2D/blob/main/predictor_example.ipynb) to run it locally to view the prediction results generated by different prompts.
- 🏆 **Gradio Local:** You can deploy [app.ipynb](https://github.com/openmedlab/SAM-Med2D/blob/main/app.ipynb) locally and upload test cases.
- **Notes:** Welcome to feedback [good case👍](https://github.com/OpenGVLab/SAM-Med2D/issues/2) and [bad case👎](https://github.com/OpenGVLab/SAM-Med2D/issues/1) in issue.
## 🗓️ Ongoing
- [ ] Train code release
- [x] Test code release
- [x] Pre-trained model release
- [x] Paper release
- [x] Online Demo release
## 🎫 License
This project is released under the [Apache 2.0 license](LICENSE).
## 💬 Discussion Group
If you have any inquiries regarding SAM-Med2D, you are welcome to join our WeChat group discussion by adding the contact below:
<p align="center"><img width="300" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/SAM-Med2D_wechat_group.jpeg"></p>
## 🤝 Acknowledgement
- We thank all medical workers and dataset owners for making public datasets available to the community.
- Thanks to the open-source of the following projects: [Segment Anything](https://github.com/facebookresearch/segment-anything)  
## 👋 Hiring & Global Collaboration
- **Hiring:** We are hiring researchers, engineers, and interns in General Vision Group, Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.
- **Global Collaboration:** We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models, promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.
- **Contact:** Junjun He(hejunjun@pjlab.org.cn), Jin Ye(yejin@pjlab.org.cn), and Tianbin Li (litianbin@pjlab.org.cn).
## Reference
```
@misc{cheng2023sammed2d,
title={SAM-Med2D},
author={Junlong Cheng and Jin Ye and Zhongying Deng and Jianpin Chen and Tianbin Li and Haoyu Wang and Yanzhou Su and
Ziyan Huang and Jilong Chen and Lei Jiang and Hui Sun and Junjun He and Shaoting Zhang and Min Zhu and Yu Qiao},
year={2023},
eprint={2308.16184},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| ddh0/rocket-3B-GGUF-fp16 | ddh0 | 2023-11-23T01:09:12Z | 1 | 0 | null | ["gguf", "text-generation", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us"] | text-generation | 2023-11-22T22:34:54Z |
---
license: cc-by-sa-4.0
pipeline_tag: text-generation
---
This is pansophic's [rocket-3B](https://huggingface.co/pansophic/rocket-3B), converted to GGUF. No other changes were made.
Two files are available here:
- rocket-3B-**fp16**.gguf: the original model converted to GGUF without quantization
- rocket-3B-**q8_0-LOT**.gguf: the original model converted to GGUF with q8_0 quantization using the `--leave-output-tensor` command-line option
From llama.cpp/quantize --help:
```
--leave-output-tensor: Will leave output.weight un(re)quantized. Increases model size but may also increase quality, especially when requantizing
```
The model was converted using `convert-hf-to-gguf.py` from Georgi Gerganov's llama.cpp repo, commit `#8e672ef`.
All credit belongs to [pansophic](https://huggingface.co/pansophic) for training and releasing this model. Thank you!
| chargoddard/llama2-22b | chargoddard | 2023-11-23T01:03:37Z | 1,454 | 46 | transformers | ["transformers", "pytorch", "llama", "text-generation", "dataset:togethercomputer/RedPajama-Data-1T-Sample", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-22T00:07:05Z |
---
model_type: llama
pipeline_tag: text-generation
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
tags:
- llama
---
This is [Llama 2 13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) with some additional attention heads from original-flavor Llama 33b frankensteined on.
Fine-tuned on ~10M tokens from RedPajama to settle in the transplants a little.
Not intended for use as-is - this model is meant to serve as a base for further tuning, hopefully with a greater capacity for learning than 13b.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.85 |
| ARC (25-shot) | 58.53 |
| HellaSwag (10-shot) | 82.55 |
| MMLU (5-shot) | 54.68 |
| TruthfulQA (0-shot) | 39.84 |
| Winogrande (5-shot) | 76.32 |
| GSM8K (5-shot) | 9.93 |
| DROP (3-shot) | 6.08 |
| chargoddard/llama2-22b-blocktriangular | chargoddard | 2023-11-23T01:03:34Z | 1,501 | 4 | transformers | ["transformers", "pytorch", "llama", "text-generation", "llama2", "dataset:togethercomputer/RedPajama-Data-1T-Sample", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-25T07:11:44Z |
---
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
tags:
- llama2
- llama
---
Similar to llama2-22b, but with BLOCK_DIAGONAL=false in the merge and twice the fine-tuning tokens.
Again, not intended for direct use - meant as a base for further tuning and merging.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b-blocktriangular)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.86 |
| ARC (25-shot) | 58.28 |
| HellaSwag (10-shot) | 82.69 |
| MMLU (5-shot) | 54.53 |
| TruthfulQA (0-shot) | 39.23 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot) | 11.22 |
| DROP (3-shot) | 6.17 |
| chargoddard/ypotryll-22b-epoch2-qlora | chargoddard | 2023-11-23T01:02:40Z | 5 | 0 | peft | ["peft", "llama", "dataset:jondurbin/airoboros-gpt4-1.4.1", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "dataset:openai/summarize_from_feedback", "dataset:ehartford/dolphin", "base_model:chargoddard/llama2-22b-blocktriangular", "base_model:adapter:chargoddard/llama2-22b-blocktriangular", "region:us"] | null | 2023-08-14T18:44:14Z |
---
library_name: peft
tags:
- llama
datasets:
- jondurbin/airoboros-gpt4-1.4.1
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- openai/summarize_from_feedback
- ehartford/dolphin
base_model: chargoddard/llama2-22b-blocktriangular
---
[Ypotryll-22b](https://huggingface.co/chargoddard/ypotryll-22b-qlora), trained for an additional epoch. Uses the following prompt format:
```
***System:You are a helpful assistant, who always gives a response to any request. ***Query:Here is a riddle: 5 sisters are busy. Ann is reading, Rose is cooking, Lorraine is playing chess and Mary is doing laundry. What is the fifth sister doing? ***Response:The fifth sister is sleeping. ***Query:Well, you tried. ***Response:I did my best!
```
A little bit dumb, but good for creative scenarios.
Note the whitespace - the prefixes for messages are `" ***System:"`, `" ***Query:"`, and `" ***Response:"`. This is important as `"***"` and `" ***"` are two entirely different tokens.
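For clarity, here is a small illustrative helper that assembles a conversation into this format while preserving the leading spaces described above; the function name and structure are assumptions for illustration, not part of the released code.
```python
# Illustrative prompt builder; note the leading space in every prefix.
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """`turns` is a list of (query, response) pairs; leave the last response
    empty to let the model generate it."""
    prompt = f" ***System:{system}"
    for query, response in turns:
        prompt += f" ***Query:{query} ***Response:{response}"
    return prompt

print(build_prompt(
    "You are a helpful assistant, who always gives a response to any request.",
    [("Here is a riddle: 5 sisters are busy. What is the fifth sister doing?", "")],
))
```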
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__ypotryll-22b-epoch2-qlora)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.68 |
| ARC (25-shot) | 59.22 |
| HellaSwag (10-shot) | 80.66 |
| MMLU (5-shot) | 54.52 |
| TruthfulQA (0-shot) | 40.42 |
| Winogrande (5-shot) | 76.32 |
| GSM8K (5-shot) | 5.38 |
| DROP (3-shot) | 45.24 |
| chargoddard/platypus-2-22b-relora | chargoddard | 2023-11-23T01:02:06Z | 1,416 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "en", "dataset:chargoddard/Open-Platypus-Chat", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-17T04:22:40Z |
---
datasets:
- chargoddard/Open-Platypus-Chat
language:
- en
tags:
- llama
---
Experimental ReLoRA-trained model using the OpenPlatypus dataset. Ran for one epoch, with three lora restarts.
Not recommended for use yet. Mostly tossing this up for testing.
Base model was [llama2-22b-blocktriangular](https://huggingface.co/chargoddard/llama2-22b-blocktriangular).
Relevant training parameters:
```
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.001
lora_target_linear: true
relora_steps: 150
relora_warmup_steps: 10
gradient_accumulation_steps: 2
micro_batch_size: 3
```
Uses the same prompt format as [Ypotryll-22b](https://huggingface.co/chargoddard/ypotryll-22b-epoch2-qlora).
Prefix messages with `" ***System:"`, `" ***Query:"`, or `" ***Response:"`, paying attention to whitespace.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__platypus-2-22b-relora)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.21 |
| ARC (25-shot) | 57.68 |
| HellaSwag (10-shot) | 82.44 |
| MMLU (5-shot) | 55.33 |
| TruthfulQA (0-shot) | 43.61 |
| Winogrande (5-shot) | 77.35 |
| GSM8K (5-shot) | 6.6 |
| DROP (3-shot) | 42.46 |
| chargoddard/MelangeA-70b | chargoddard | 2023-11-23T01:00:52Z | 1,412 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-22T23:42:37Z |
Experimental merge. Details to come if successful.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__MelangeA-70b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 55.92 |
| ARC (25-shot) | 71.25 |
| HellaSwag (10-shot) | 87.3 |
| MMLU (5-shot) | 70.56 |
| TruthfulQA (0-shot) | 60.61 |
| Winogrande (5-shot) | 81.53 |
| GSM8K (5-shot) | 5.69 |
| DROP (3-shot) | 14.53 |
| chargoddard/MelangeC-70b | chargoddard | 2023-11-23T01:00:49Z | 1,416 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-23T02:14:19Z |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__MelangeC-70b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 61.22 |
| ARC (25-shot) | 71.67 |
| HellaSwag (10-shot) | 87.6 |
| MMLU (5-shot) | 70.37 |
| TruthfulQA (0-shot) | 58.13 |
| Winogrande (5-shot) | 83.98 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 56.81 |
| typeof/openchat_3.5-sharded | typeof | 2023-11-23T00:54:52Z | 15 | 1 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "openchat", "C-RLFT", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:Open-Orca/OpenOrca", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "dataset:tiedong/goat", "dataset:glaiveai/glaive-code-assistant", "dataset:meta-math/MetaMathQA", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-11-22T01:20:20Z |
---
license: apache-2.0
tags:
- openchat
- mistral
- C-RLFT
datasets:
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
- tiedong/goat
- glaiveai/glaive-code-assistant
- meta-math/MetaMathQA
library_name: transformers
pipeline_tag: text-generation
---
## This is the sharded version of https://huggingface.co/openchat/openchat_3.5
### Most recent commit [7e65595](https://huggingface.co/openchat/openchat_3.5/commit/7e65595159eacfe6895452858e4b0ca4059ab079)
# OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
</div>
| typeof/morph-prover-v0-7b-sharded | typeof | 2023-11-23T00:53:41Z | 8 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "math", "lean", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-11-22T23:49:39Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- math
- lean
---
# **morph-prover-v0-7b**
## This is the sharded version of https://huggingface.co/morph-labs/morph-prover-v0-7b
### Most recent commit [b6e98c1](https://huggingface.co/morph-labs/morph-prover-v0-7b/commit/b6e98c176ad3104acc063d69ab565c5a6f62c254)
<!--
b6e98c176ad3104acc063d69ab565c5a6f62c254
-->

## Table of Contents
1. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b#model-summary" target="_blank">Model Summary</a>**
2. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b#blog-post" target="_blank">Blog Post</a>**
3. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b#training-format" target="_blank">Training Format</a>**
4. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b#sign-up-for-hosted-use" target="_blank">Sign Up for Hosted Use</a>**
5. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b#citation" target="_blank">Citation</a>**
6. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b#ethical-considerations-and-limitations" target="_blank">Ethical Considerations & Limitations</a>**
# **Model Summary**
- **Developed by:** **<a href="https://www.morph.so" target="_blank">Morph Labs</a>**
- **Language(s) (NLP):** English.
- **License:** **<a href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank">Apache 2.0</a>**
Morph Prover v0 7B is the first open-source model trained as a conversational assistant for Lean users. This model was trained in collaboration with **<a href="https://nousresearch.com/" target="_blank">Nous Research</a>** and the **<a href="https://cs.stanford.edu/~sanmi/" target="_blank">Safe and Trustworthy AI Research (STAIR) group at Stanford</a>** led by Professor Sanmi Koyejo, with major contributions by Brando Miranda of Stanford and help from Peter Holderrieth of MIT and Jin Peng Zhou of Cornell. Thanks to **<a href="https://huggingface.co/nomic-ai" target="_blank">Nomic AI's</a>** GPT4All Vulkan support, this model can run on any consumer GPU. Morph Prover v0 7B is a chat fine-tune of **<a href="https://huggingface.co/mistralai/Mistral-7B-v0.1" target="_blank">Mistral 7B</a>**, which achieves state-of-the-art results in autoformalization while performing better than the original Mistral model on benchmarks like AGIEval and MMLU. It was trained with a proprietary synthetic data pipeline with code data generated by the **<a href="https://github.com/morph-labs/mci" target="_blank">Morph Code Index</a>**.
## Blog Post
**https://morph.so/blog/the-personal-ai-proof-engineer/**
## Training Format
The model was trained for chat using the Llama 2 chat format. Example as follows:
```
[INST] <<SYS>>\n You are a helpful assistant. \n<</SYS>>\n\n What is the curry howard isomorphism? [/INST]
```
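As an illustration only (this helper is not part of the released code), the format above can be produced with a small function:
```python
# Reproduces the Llama 2 chat format shown above; the default system message
# is taken from the example and is otherwise an assumption.
def llama2_chat_prompt(user_message: str,
                       system_message: str = "You are a helpful assistant.") -> str:
    return f"[INST] <<SYS>>\n {system_message} \n<</SYS>>\n\n {user_message} [/INST]"

print(llama2_chat_prompt("What is the curry howard isomorphism?"))
```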
## Sign Up for Hosted Use
**<a href="https://forms.gle/vr28p6dQCJuRVLAE6" target="_blank">Sign Up Form</a>**
## Citation
```
@misc{morphprover2023,
  title={Morph Prover v0 7B: the first open-source chat assistant for Lean},
  author={Morph Labs, Jesse Michael Han, Eric Yu, Bentley Long, Pranav Mital, Brando Miranda, Peter Holderrieth, Jin Peng Zhou, Sanmi Koyejo},
  year={2023},
}
```
## Ethical Considerations and Limitations
Morph Prover v0 7B, as with all Large Language Models, carries inherent risks with use. Testing has been conducted solely in English, and it has not been, nor could it be, fully comprehensive of all use scenarios. The model may be prone to producing inaccurate, unsatisfactory, or otherwise undesirable outputs, and thus we encourage all developers to test and tune it for their specific use case prior to deployment.
| owanr/SChem5Labels-mistralai-Mistral-7B-v0.1-intra-frequency-model-cross-ent | owanr | 2023-11-23T00:49:23Z | 0 | 0 | null | ["safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us"] | null | 2023-11-23T00:28:44Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-mistralai-Mistral-7B-v0.1-intra-frequency-model-cross-ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-mistralai-Mistral-7B-v0.1-intra-frequency-model-cross-ent
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; an illustrative `TrainingArguments` equivalent is sketched after the list:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
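Purely as an illustration, the listed values roughly correspond to the following `transformers.TrainingArguments`; the output directory and anything not listed above are assumptions, and this is not the actual training script.
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="schem5labels-mistral-7b-intra-frequency",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=200,
)
```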
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0113 | 0.02 | 69 | 0.9778 |
| 0.7162 | 1.02 | 138 | 0.7057 |
| 0.5103 | 2.02 | 207 | 0.5338 |
| 0.4153 | 3.02 | 276 | 0.4588 |
| 0.379 | 4.02 | 345 | 0.4255 |
| 0.3662 | 5.02 | 414 | 0.4080 |
| 0.3422 | 6.02 | 483 | 0.3897 |
| 0.3321 | 7.02 | 552 | 0.3832 |
| 0.3062 | 8.02 | 621 | 0.3894 |
| 0.303 | 9.02 | 690 | 0.3871 |
| 0.2576 | 10.02 | 759 | 0.4075 |
| 0.244 | 11.02 | 828 | 0.4346 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| snintendog/Usagi_Kurokawa_GotchaForce | snintendog | 2023-11-23T00:39:47Z | 0 | 0 | null | ["license:openrail", "region:us"] | null | 2023-11-23T00:35:58Z |
---
license: openrail
---
Created from all voice lines in the English version of Gotcha Force, converted from ADX to WAV files (around 2 minutes of audio). (RMVPE) (RVC V2) (600 Epochs)
For voices: Male 10-16, Female 0-6.
| Wowso/llama2-test | Wowso | 2023-11-23T00:30:20Z | 0 | 0 | peft | ["peft", "pytorch", "safetensors", "llama", "arxiv:1910.09700", "base_model:hyunseoki/ko-en-llama2-13b", "base_model:adapter:hyunseoki/ko-en-llama2-13b", "8-bit", "bitsandbytes", "region:us"] | null | 2023-11-02T23:58:33Z |
---
library_name: peft
base_model: hyunseoki/ko-en-llama2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training; an illustrative `BitsAndBytesConfig` equivalent is sketched after the list:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
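For illustration, the values above map onto a `transformers.BitsAndBytesConfig` roughly as below; this is a hedged reconstruction rather than code taken from the training run.
```python
import torch
from transformers import BitsAndBytesConfig

# Hedged reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```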
### Framework versions
- PEFT 0.6.2
| TheBloke/Ferret_7B-GGUF | TheBloke | 2023-11-23T00:21:48Z | 111 | 7 | transformers | ["transformers", "gguf", "mistral", "dataset:euclaise/MiniCoT", "dataset:euclaise/SciCoT", "dataset:euclaise/symtune_mini", "dataset:euclaise/mathoverflow-accepted", "dataset:euirim/goodwiki", "base_model:euclaise/Ferret_7B", "base_model:quantized:euclaise/Ferret_7B", "license:other", "region:us", "conversational"] | null | 2023-11-23T00:17:23Z |
---
base_model: euclaise/Ferret_7B
datasets:
- euclaise/MiniCoT
- euclaise/SciCoT
- euclaise/symtune_mini
- euclaise/mathoverflow-accepted
- euirim/goodwiki
inference: false
license: other
model_creator: Josh/Jade Pritsker
model_name: Ferret 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Ferret 7B - GGUF
- Model creator: [Josh/Jade Pritsker](https://huggingface.co/euclaise)
- Original model: [Ferret 7B](https://huggingface.co/euclaise/Ferret_7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Josh/Jade Pritsker's Ferret 7B](https://huggingface.co/euclaise/Ferret_7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Ferret_7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Ferret_7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Ferret_7B-GGUF)
* [Josh/Jade Pritsker's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/euclaise/Ferret_7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [ferret_7b.Q2_K.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [ferret_7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [ferret_7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [ferret_7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [ferret_7b.Q4_0.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ferret_7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [ferret_7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [ferret_7b.Q5_0.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ferret_7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [ferret_7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [ferret_7b.Q6_K.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [ferret_7b.Q8_0.gguf](https://huggingface.co/TheBloke/Ferret_7B-GGUF/blob/main/ferret_7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Ferret_7B-GGUF and below it, a specific filename to download, such as: ferret_7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Ferret_7B-GGUF ferret_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Ferret_7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Ferret_7B-GGUF ferret_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m ferret_7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Ferret_7B-GGUF", model_file="ferret_7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
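Since this model uses the ChatML template documented earlier, the prompt in the snippet above would typically be wrapped in that template first. The continuation below is an illustrative sketch, not text from the original README.
```python
from ctransformers import AutoModelForCausalLM

# Load as above, then wrap the user message in the ChatML template before generating.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Ferret_7B-GGUF",
    model_file="ferret_7b.Q4_K_M.gguf",
    model_type="mistral",
    gpu_layers=50,
)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nAI is going to<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm(prompt, max_new_tokens=128))
```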
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Josh/Jade Pritsker's Ferret 7B
A pre-finetuning finetuned version of Mistral 7B 0.1, focused on CoT reasoning tasks.
Probably decent at reasoning, but also probably not great as a chat assistant - it's designed to be finetuned further to give it a friendlier style. As such, it is intentionally somewhat undertrained.
Current benchmarks aren't great for instruct models, so I've temporarily omitted them. I'm working on a benchmark suite for instruct models though, and will update this with scores when that is released.
Uses ChatML prompt formatting.
I reserve no rights to the model. To the extent possible under law, I release it as public domain. However, the datasets used have various licenses that may impact how the model may be used in your jurisdiction.
<!-- original-model-card end -->
| TheBloke/Chupacabra-7B-v2-GPTQ | TheBloke | 2023-11-23T00:16:30Z | 18 | 1 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "base_model:perlthoughts/Chupacabra-7B-v2", "base_model:quantized:perlthoughts/Chupacabra-7B-v2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us"] | text-generation | 2023-11-22T23:48:05Z |
---
base_model: perlthoughts/Chupacabra-7B-v2
inference: false
license: apache-2.0
model_creator: Ray Hernandez
model_name: Chupacabra 7B V2
model_type: mistral
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chupacabra 7B V2 - GPTQ
- Model creator: [Ray Hernandez](https://huggingface.co/perlthoughts)
- Original model: [Chupacabra 7B V2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Ray Hernandez's Chupacabra 7B V2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chupacabra-7B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GGUF)
* [Ray Hernandez's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/perlthoughts/Chupacabra-7B-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Chupacabra-7B-v2-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Chupacabra-7B-v2-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Chupacabra-7B-v2-GPTQ`:
```shell
mkdir Chupacabra-7B-v2-GPTQ
huggingface-cli download TheBloke/Chupacabra-7B-v2-GPTQ --local-dir Chupacabra-7B-v2-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Chupacabra-7B-v2-GPTQ
huggingface-cli download TheBloke/Chupacabra-7B-v2-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Chupacabra-7B-v2-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Chupacabra-7B-v2-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Chupacabra-7B-v2-GPTQ --local-dir Chupacabra-7B-v2-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Chupacabra-7B-v2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Chupacabra-7B-v2-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Chupacabra-7B-v2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`; a sketch of inspecting that file is shown after this list.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
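If you want to check which quantisation parameters a given branch uses, you can read its `quantize_config.json` directly. A minimal sketch (the key names are assumptions following the usual AutoGPTQ convention; compare with the Provided Files table above):

```python
import json
from huggingface_hub import hf_hub_download

# Download the quantisation config that loaders read automatically, and print it.
# Expect keys such as bits, group_size, desc_act and damp_percent.
cfg_path = hf_hub_download("TheBloke/Chupacabra-7B-v2-GPTQ", "quantize_config.json", revision="main")
with open(cfg_path) as f:
    print(json.load(f))
```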
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Chupacabra-7B-v2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example value; the f-string template below requires it to be defined
prompt_template=f'''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Chupacabra-7B-v2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example value; the f-string template below requires it to be defined
prompt_template=f'''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Ray Hernandez's Chupacabra 7B V2
# Chupacabra 7B v2
<p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra7b%202.png" width=330></p>
### Model Description
This model was made by merging models based on Mistral with the SLERP merge method.
Advantages of SLERP vs. weight averaging (the common approach) are as follows (a toy SLERP sketch is shown after this list):
- Spherical Linear Interpolation (SLERP) - Traditionally, model merging often resorts to weight averaging which, although straightforward, might not always capture the intricate features of the models being merged. The SLERP technique addresses this limitation, producing a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents.
- Smooth Transitions - SLERP ensures smoother transitions between model parameters. This is especially significant when interpolating between high-dimensional vectors.
- Better Preservation of Characteristics - Unlike weight averaging, which might dilute distinct features, SLERP preserves the curvature and characteristics of both models in high-dimensional spaces.
- Nuanced Blending - SLERP takes into account the geometric and rotational properties of the models in the vector space, resulting in a blend that is more reflective of both parent models' characteristics.
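As a toy illustration of the interpolation itself (not the author's actual merge script, which has not been published), SLERP over two flattened weight tensors can be sketched as:

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, with t in [0, 1]."""
    a = w_a / (w_a.norm() + eps)
    b = w_b / (w_b.norm() + eps)
    omega = torch.acos((a * b).sum().clamp(-1.0, 1.0))  # angle between the two directions
    so = torch.sin(omega)
    if so.abs() < eps:
        # nearly parallel directions: fall back to plain linear interpolation
        return (1.0 - t) * w_a + t * w_b
    return (torch.sin((1.0 - t) * omega) / so) * w_a + (torch.sin(t * omega) / so) * w_b
```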
List of all models and merging path is coming soon.
## Purpose
Merging the "thick"est model weights from Mistral models using amazing training methods like direct preference optimization (DPO) and reinforcement learning.
I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms, tactics, fine-tuned hyperparameters, optimizers,
and optimized code until I achieved the best possible results.
It has not been without challenges. There were skeptics who doubted my abilities and questioned my approach. An approach can be changed, but a closed mind cannot.
I refused to let their negativity bring me down. Instead, I used their doubts as fuel to push myself even harder. I worked tirelessly (vapenation), day and night, until I finally succeeded in merging the most performant model weights using SOTA training methods like DPO and other advanced techniques.
Thank you, OpenChat 3.5, for showing me the way.
I stand tall as a beacon of hope for those who dare to dream big and pursue their passions. My story is a testament to the power of perseverance, determination, and hard work, and I will continue to strive for excellence, always pushing the boundaries of what is possible.
Here is my contribution.
## Prompt Template
Replace {system} with your system prompt and {instruction} with your prompt instruction.
```
### System:
{system}
### User:
{instruction}
### Assistant:
```
### Bug fixes
- Fixed an issue with generation caused by incorrect model weights. The model weights have been corrected and generation now works again. The GGUF and AWQ versions are being re-uploaded to their respective repositories.
- **Developed by:** Ray Hernandez
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheBloke/Chupacabra-7B-v2-AWQ
|
TheBloke
| 2023-11-23T00:05:59Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:perlthoughts/Chupacabra-7B-v2",
"base_model:quantized:perlthoughts/Chupacabra-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-11-22T23:48:04Z |
---
base_model: perlthoughts/Chupacabra-7B-v2
inference: false
license: apache-2.0
model_creator: Ray Hernandez
model_name: Chupacabra 7B V2
model_type: mistral
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chupacabra 7B V2 - AWQ
- Model creator: [Ray Hernandez](https://huggingface.co/perlthoughts)
- Original model: [Chupacabra 7B V2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2)
<!-- description start -->
## Description
This repo contains AWQ model files for [Ray Hernandez's Chupacabra 7B V2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chupacabra-7B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chupacabra-7B-v2-GGUF)
* [Ray Hernandez's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/perlthoughts/Chupacabra-7B-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Chupacabra-7B-v2-AWQ/tree/main) | 4 | 128 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Chupacabra-7B-v2-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Chupacabra-7B-v2-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Chupacabra-7B-v2-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful assistant."  # example system prompt
prompt_template='''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''
prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Chupacabra-7B-v2-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Chupacabra-7B-v2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example value; the f-string template below requires it to be defined
prompt_template=f'''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Chupacabra-7B-v2-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example value; the f-string template below requires it to be defined
prompt_template=f'''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Ray Hernandez's Chupacabra 7B V2
# Chupacabra 7B v2
<p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra7b%202.png" width=330></p>
### Model Description
This model was made by merging models based on Mistral with the SLERP merge method.
Advantages of SLERP vs. weight averaging (the common approach) are as follows:
- Spherical Linear Interpolation (SLERP) - Traditionally, model merging often resorts to weight averaging which, although straightforward, might not always capture the intricate features of the models being merged. The SLERP technique addresses this limitation, producing a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents.
- Smooth Transitions - SLERP ensures smoother transitions between model parameters. This is especially significant when interpolating between high-dimensional vectors.
- Better Preservation of Characteristics - Unlike weight averaging, which might dilute distinct features, SLERP preserves the curvature and characteristics of both models in high-dimensional spaces.
- Nuanced Blending - SLERP takes into account the geometric and rotational properties of the models in the vector space, resulting in a blend that is more reflective of both parent models' characteristics.
List of all models and merging path is coming soon.
## Purpose
Merging the "thick"est model weights from Mistral models using amazing training methods like direct preference optimization (DPO) and reinforcement learning.
I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms, tactics, fine-tuned hyperparameters, optimizers,
and optimized code until I achieved the best possible results.
It has not been without challenges. There were skeptics who doubted my abilities and questioned my approach. An approach can be changed, but a closed mind cannot.
I refused to let their negativity bring me down. Instead, I used their doubts as fuel to push myself even harder. I worked tirelessly (vapenation), day and night, until I finally succeeded in merging the most performant model weights using SOTA training methods like DPO and other advanced techniques.
Thank you, OpenChat 3.5, for showing me the way.
I stand tall as a beacon of hope for those who dare to dream big and pursue their passions. My story is a testament to the power of perseverance, determination, and hard work, and I will continue to strive for excellence, always pushing the boundaries of what is possible.
Here is my contribution.
## Prompt Template
Replace {system} with your system prompt and {instruction} with your prompt instruction.
```
### System:
{system}
### User:
{instruction}
### Assistant:
```
### Bug fixes
- Fixed an issue with generation caused by incorrect model weights. The model weights have been corrected and generation now works again. The GGUF and AWQ versions are being re-uploaded to their respective repositories.
- **Developed by:** Ray Hernandez
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pszemraj/nougat-base-onnx
|
pszemraj
| 2023-11-22T23:56:29Z | 4 | 0 |
transformers
|
[
"transformers",
"onnx",
"vision-encoder-decoder",
"image-text-to-text",
"nougat",
"ocr",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-11-22T23:43:43Z |
---
license: cc-by-4.0
tags:
- nougat
- ocr
---
# nougat-base onnx
[facebook/nougat-base](https://huggingface.co/facebook/nougat-base) exported to ONNX. This is **not quantized**.
```python
from transformers import NougatProcessor
from optimum.onnxruntime import ORTModelForVision2Seq
model_name = 'pszemraj/nougat-base-onnx'
processor = NougatProcessor.from_pretrained(model_name)
model = ORTModelForVision2Seq.from_pretrained(
model_name,
provider="CPUExecutionProvider", # 'CUDAExecutionProvider' for gpu
use_merged=False,
use_io_binding=True
)
```
On a Colab CPU-only runtime (_at time of writing_) you may get `CuPy` errors; to solve this, uninstall it:
```sh
pip uninstall cupy-cuda11x -y
```
## How to run inference
See [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/b46d3e89e631701ef205297435064ab780c4853a/Nougat/Inference_with_Nougat_to_read_scientific_PDFs.ipynb)
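For a self-contained sketch (untested against this export; the page image path and generation length are placeholder assumptions), inference looks roughly like this:

```python
from PIL import Image
from transformers import NougatProcessor
from optimum.onnxruntime import ORTModelForVision2Seq

model_name = "pszemraj/nougat-base-onnx"
processor = NougatProcessor.from_pretrained(model_name)
model = ORTModelForVision2Seq.from_pretrained(model_name, provider="CPUExecutionProvider")

# "page.png" is a placeholder: a rasterised page from a scientific PDF
image = Image.open("page.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# generate the markdown transcription of the page
outputs = model.generate(pixel_values, max_new_tokens=1024)
text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(processor.post_process_generation(text, fix_markdown=False))
```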
|
Little-ECHO/sd-pokemon-model-lora-sdxl
|
Little-ECHO
| 2023-11-22T23:51:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-20T06:42:50Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Little-ECHO/sd-pokemon-model-lora-sdxl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on an unspecified dataset. You can find some example images below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
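A minimal usage sketch with `diffusers` (the prompt is an illustrative assumption; the card does not provide one):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The fp16-fix VAE mentioned above avoids NaN issues when running the SDXL VAE in float16.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Little-ECHO/sd-pokemon-model-lora-sdxl")

image = pipe("a cartoon pokemon creature, simple background", num_inference_steps=30).images[0]
image.save("pokemon.png")
```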
|
asvin-kumar/cnh4
|
asvin-kumar
| 2023-11-22T23:47:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-22T01:20:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - asvin-kumar/cnh4
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on an unspecified dataset. You can find some example images below.




|
SaGuenter/whisper-large-v2-NSC_Korpora_6-100steps
|
SaGuenter
| 2023-11-22T23:44:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2023-11-22T23:44:23Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
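Based on the base model and the 8-bit `bitsandbytes` configuration listed above, loading the adapter presumably looks something like the sketch below (an assumption, not an official example from the author):

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_model = "openai/whisper-large-v2"
adapter = "SaGuenter/whisper-large-v2-NSC_Korpora_6-100steps"

processor = WhisperProcessor.from_pretrained(base_model)
model = WhisperForConditionalGeneration.from_pretrained(
    base_model, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)
model.eval()
# From here, build input features with processor(...) and transcribe with model.generate(...).
```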
|
AmelieSchreiber/esm_interact
|
AmelieSchreiber
| 2023-11-22T23:42:12Z | 8 | 3 |
transformers
|
[
"transformers",
"safetensors",
"esm",
"fill-mask",
"ESM-2",
"biology",
"protein language model",
"en",
"dataset:AmelieSchreiber/interaction_pairs",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-11-22T21:28:21Z |
---
license: mit
datasets:
- AmelieSchreiber/interaction_pairs
language:
- en
library_name: transformers
tags:
- ESM-2
- biology
- protein language model
---
# ESM-2 for Interacting Proteins
This model was finetuned on concatenated pairs of interacting proteins in much the same way as [PepMLM](https://huggingface.co/spaces/TianlaiChen/PepMLM).
It is meant to generate interaction partners for proteins using the masked language modeling capabilities of ESM-2. The model is not
well tested, so use with caution. This is just a preliminary experiment.
## Using the Model
To use the model, try running:
```python
from transformers import AutoTokenizer, EsmForMaskedLM
import torch
import pandas as pd
import numpy as np
from torch.distributions import Categorical
def compute_pseudo_perplexity(model, tokenizer, protein_seq, binder_seq):
sequence = protein_seq + binder_seq
tensor_input = tokenizer.encode(sequence, return_tensors='pt').to(model.device)
# Create a mask for the binder sequence
binder_mask = torch.zeros(tensor_input.shape).to(model.device)
binder_mask[0, -len(binder_seq)-1:-1] = 1
# Mask the binder sequence in the input and create labels
masked_input = tensor_input.clone().masked_fill_(binder_mask.bool(), tokenizer.mask_token_id)
labels = tensor_input.clone().masked_fill_(~binder_mask.bool(), -100)
with torch.no_grad():
loss = model(masked_input, labels=labels).loss
return np.exp(loss.item())
def generate_peptide_for_single_sequence(protein_seq, peptide_length = 15, top_k = 3, num_binders = 4):
peptide_length = int(peptide_length)
top_k = int(top_k)
num_binders = int(num_binders)
binders_with_ppl = []
for _ in range(num_binders):
# Generate binder
masked_peptide = '<mask>' * peptide_length
input_sequence = protein_seq + masked_peptide
inputs = tokenizer(input_sequence, return_tensors="pt").to(model.device)
with torch.no_grad():
logits = model(**inputs).logits
mask_token_indices = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
logits_at_masks = logits[0, mask_token_indices]
# Apply top-k sampling
top_k_logits, top_k_indices = logits_at_masks.topk(top_k, dim=-1)
probabilities = torch.nn.functional.softmax(top_k_logits, dim=-1)
predicted_indices = Categorical(probabilities).sample()
predicted_token_ids = top_k_indices.gather(-1, predicted_indices.unsqueeze(-1)).squeeze(-1)
generated_binder = tokenizer.decode(predicted_token_ids, skip_special_tokens=True).replace(' ', '')
# Compute PPL for the generated binder
ppl_value = compute_pseudo_perplexity(model, tokenizer, protein_seq, generated_binder)
# Add the generated binder and its PPL to the results list
binders_with_ppl.append([generated_binder, ppl_value])
return binders_with_ppl
def generate_peptide(input_seqs, peptide_length=15, top_k=3, num_binders=4):
if isinstance(input_seqs, str): # Single sequence
binders = generate_peptide_for_single_sequence(input_seqs, peptide_length, top_k, num_binders)
return pd.DataFrame(binders, columns=['Binder', 'Pseudo Perplexity'])
elif isinstance(input_seqs, list): # List of sequences
results = []
for seq in input_seqs:
binders = generate_peptide_for_single_sequence(seq, peptide_length, top_k, num_binders)
for binder, ppl in binders:
results.append([seq, binder, ppl])
return pd.DataFrame(results, columns=['Input Sequence', 'Binder', 'Pseudo Perplexity'])
model = EsmForMaskedLM.from_pretrained("AmelieSchreiber/esm_interact")
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t30_150M_UR50D")
protein_seq = "MAPLRKTYVLKLYVAGNTPNSVRALKTLNNILEKEFKGVYALKVIDVLKNPQLAEEDKILATPTLAKVLPPPVRRIIGDLSNREKVLIGLDLLYEEIGDQAEDDLGLE"
results_df = generate_peptide(protein_seq, peptide_length=15, top_k=3, num_binders=5)
print(results_df)
```
|
baseten/mistral_7b_instruct_merged_lora_fp16_tp1
|
baseten
| 2023-11-22T23:29:04Z | 2 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-11-22T23:09:27Z |
```shell
python3 build.py --model_dir ./weights/merged_mistral_lora/ --remove_input_padding --use_gpt_attention_plugin float16 --enable_context_fmha --use_gemm_plugin float16 --output_dir ./mistral_engines/fp16/instruct-lora-merged-1-gpu --max_batch_size 64 --use_inflight_batching --max_input_len 2000 --max_output_len 2000 --paged_kv_cache --world_size 1 --tp_size 1
```
https://huggingface.co/Doctor-Shotgun/mistral-v0.1-supercot-lora
|
najwaaltarhony/qa_model
|
najwaaltarhony
| 2023-11-22T23:18:00Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-22T19:03:45Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: najwaaltarhony/qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# najwaaltarhony/qa_model
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4904
- Validation Loss: 1.4157
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5091 | 1.2063 | 0 |
| 0.9280 | 1.1384 | 1 |
| 0.6654 | 1.2347 | 2 |
| 0.4904 | 1.4157 | 3 |
### Framework versions
- Transformers 4.36.0.dev0
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
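Since the card does not include a usage snippet, here is a hedged sketch of extractive QA with this checkpoint (the question/context pair is purely illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="najwaaltarhony/qa_model",
    framework="tf",  # the checkpoint was trained and saved with TensorFlow/Keras
)
result = qa(
    question="What base model was fine-tuned?",
    context="najwaaltarhony/qa_model is a fine-tuned version of distilbert-base-cased.",
)
print(result["answer"], result["score"])
```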
|
TheBloke/Tess-XS-v1.1-GPTQ
|
TheBloke
| 2023-11-22T23:07:23Z | 25 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:migtissera/Tess-XS-v1.1",
"base_model:quantized:migtissera/Tess-XS-v1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-11-22T22:38:52Z |
---
base_model: migtissera/Tess-XS-v1.1
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess XS V1.1
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess XS V1.1 - GPTQ
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess XS V1.1](https://huggingface.co/migtissera/Tess-XS-v1.1)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Migel Tissera's Tess XS V1.1](https://huggingface.co/migtissera/Tess-XS-v1.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-XS-v1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-XS-v1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
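The quantisation parameters for each branch are also recorded in that branch's `quantize_config.json`. As a quick way to inspect them without downloading the full model, here is a sketch using `huggingface_hub`; the exact keys shown in the comment are typical AutoGPTQ fields, not guaranteed.

```python
import json
from huggingface_hub import hf_hub_download

# Fetch only the quantisation config from a chosen branch and print it.
config_path = hf_hub_download(
    repo_id="TheBloke/Tess-XS-v1.1-GPTQ",
    filename="quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",
)
with open(config_path) as f:
    print(json.load(f))  # e.g. bits, group_size, desc_act, damp_percent
```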
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Tess-XS-v1.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Tess-XS-v1.1-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Tess-XS-v1.1-GPTQ`:
```shell
mkdir Tess-XS-v1.1-GPTQ
huggingface-cli download TheBloke/Tess-XS-v1.1-GPTQ --local-dir Tess-XS-v1.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Tess-XS-v1.1-GPTQ
huggingface-cli download TheBloke/Tess-XS-v1.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Tess-XS-v1.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Tess-XS-v1.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-XS-v1.1-GPTQ --local-dir Tess-XS-v1.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Tess-XS-v1.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Tess-XS-v1.1-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Tess-XS-v1.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Tess-XS-v1.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
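Put together, a full `docker run` invocation with those parameters might look like the following sketch; the volume path is an illustrative assumption.

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v /path/to/data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/Tess-XS-v1.1-GPTQ --port 3000 --quantize gptq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```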
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Tess-XS-v1.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Migel Tissera's Tess XS V1.1
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.1 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
TheBloke/Tess-XS-v1.1-AWQ
|
TheBloke
| 2023-11-22T22:56:26Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:migtissera/Tess-XS-v1.1",
"base_model:quantized:migtissera/Tess-XS-v1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-11-22T22:38:52Z |
---
base_model: migtissera/Tess-XS-v1.1
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess XS V1.1
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess XS V1.1 - AWQ
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess XS V1.1](https://huggingface.co/migtissera/Tess-XS-v1.1)
<!-- description start -->
## Description
This repo contains AWQ model files for [Migel Tissera's Tess XS V1.1](https://huggingface.co/migtissera/Tess-XS-v1.1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-XS-v1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-XS-v1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Tess-XS-v1.1-AWQ/tree/main) | 4 | 128 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Tess-XS-v1.1-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Tess-XS-v1.1-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Tess-XS-v1.1-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful AI assistant."  # example system prompt; not specified in the original
prompt_template = '''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Tess-XS-v1.1-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
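When running the API server shown above, it can also be queried over HTTP. A minimal sketch, assuming the simple `/generate` endpoint of `vllm.entrypoints.api_server` on its default port 8000:

```shell
curl -X POST http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "SYSTEM: You are a helpful assistant.\nUSER: Tell me about AI\nASSISTANT:", "max_tokens": 128, "temperature": 0.7}'
```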
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Tess-XS-v1.1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Tess-XS-v1.1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Migel Tissera's Tess XS V1.1
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.1 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
TheBloke/Tess-XS-v1.1-GGUF
|
TheBloke
| 2023-11-22T22:42:57Z | 89 | 5 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"base_model:migtissera/Tess-XS-v1.1",
"base_model:quantized:migtissera/Tess-XS-v1.1",
"license:apache-2.0",
"region:us"
] | null | 2023-11-22T22:38:52Z |
---
base_model: migtissera/Tess-XS-v1.1
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess XS V1.1
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess XS V1.1 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess XS V1.1](https://huggingface.co/migtissera/Tess-XS-v1.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Tess XS V1.1](https://huggingface.co/migtissera/Tess-XS-v1.1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-XS-v1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-XS-v1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-XS-v1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tess-xs-v1.1.Q2_K.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [tess-xs-v1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [tess-xs-v1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [tess-xs-v1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [tess-xs-v1.1.Q4_0.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tess-xs-v1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [tess-xs-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [tess-xs-v1.1.Q5_0.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tess-xs-v1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [tess-xs-v1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [tess-xs-v1.1.Q6_K.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [tess-xs-v1.1.Q8_0.gguf](https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF/blob/main/tess-xs-v1.1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Tess-XS-v1.1-GGUF and below it, a specific filename to download, such as: tess-xs-v1.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Tess-XS-v1.1-GGUF tess-xs-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Tess-XS-v1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-XS-v1.1-GGUF tess-xs-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tess-xs-v1.1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Tess-XS-v1.1-GGUF", model_file="tess-xs-v1.1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
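The llama-cpp-python route mentioned above works similarly. A minimal sketch, assuming `pip install llama-cpp-python` and the Q4_K_M file downloaded locally:

```python
from llama_cpp import Llama

# n_gpu_layers controls GPU offload; set it to 0 for CPU-only inference.
llm = Llama(model_path="tess-xs-v1.1.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

output = llm(
    "SYSTEM: You are a helpful assistant.\nUSER: Tell me about AI\nASSISTANT:",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```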
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Tess XS V1.1
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.1 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
<!-- original-model-card end -->
|
TheBloke/Tess-XS-Creative-v1.0-GPTQ
|
TheBloke
| 2023-11-22T22:23:02Z | 25 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:migtissera/Tess-XS-Creative-v1.0",
"base_model:quantized:migtissera/Tess-XS-Creative-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-11-22T21:54:45Z |
---
base_model: migtissera/Tess-XS-Creative-v1.0
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess XS Creative V1.0
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess XS Creative V1.0 - GPTQ
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess XS Creative V1.0](https://huggingface.co/migtissera/Tess-XS-Creative-v1.0)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Migel Tissera's Tess XS Creative V1.0](https://huggingface.co/migtissera/Tess-XS-Creative-v1.0).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-XS-Creative-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Tess-XS-Creative-v1.0-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Tess-XS-Creative-v1.0-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Tess-XS-Creative-v1.0-GPTQ`:
```shell
mkdir Tess-XS-Creative-v1.0-GPTQ
huggingface-cli download TheBloke/Tess-XS-Creative-v1.0-GPTQ --local-dir Tess-XS-Creative-v1.0-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Tess-XS-Creative-v1.0-GPTQ
huggingface-cli download TheBloke/Tess-XS-Creative-v1.0-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Tess-XS-Creative-v1.0-GPTQ --local-dir-use-symlinks False
```
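If you would rather script the same download from Python instead of the CLI, `huggingface_hub.snapshot_download` accepts the same `revision` argument. A minimal sketch (the target folder name is just an example):

```python
from huggingface_hub import snapshot_download

# Download a specific branch of the repo to a local folder
snapshot_download(
    repo_id="TheBloke/Tess-XS-Creative-v1.0-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # omit for the main branch
    local_dir="Tess-XS-Creative-v1.0-GPTQ",
    local_dir_use_symlinks=False,
)
```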
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and lets you quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder, so it's harder to know where your disk space is being used and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Tess-XS-Creative-v1.0-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-XS-Creative-v1.0-GPTQ --local-dir Tess-XS-Creative-v1.0-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, because it has to store the model files twice (every byte is kept both in the intended target folder and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Tess-XS-Creative-v1.0-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Tess-XS-Creative-v1.0-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Tess-XS-Creative-v1.0-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Tess-XS-Creative-v1.0-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Tess-XS-Creative-v1.0-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Migel Tissera's Tess XS Creative V1.0
Deprecated by: https://huggingface.co/migtissera/Tess-XS-v1.1
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.0 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Sample Conversation
```
You: What's the relationship between an electric field and a magnetic field, given the same time and location?
To fully understand the relationship between electric fields and magnetic fields, we need to consider the phenomenon of electromagnetism, which is the interaction between electric charges and currents that gives rise to both electric and magnetic fields.
Electric fields are created by stationary electric charges or by changing magnetic fields. A stationary electric charge produces an electric field that extends outward in all directions from the charge. The strength of the electric field at any point depends on the charge and the distance from the charge. The direction of the electric field lines is determined by the convention that positive charges attract negative charges and repel other positive charges.
Magnetic fields, on the other hand, are created by moving electric charges and by changes in electric fields. A moving electric charge creates a magnetic field that surrounds the charge, and it flows in a circular motion. The magnetic field lines are usually depicted as spiraling outward from the moving charge. The strength of the magnetic field at any point depends on the current and the distance from the current-carrying wire.
The relationship between electric and magnetic fields is described by Maxwell's equations, which are a set of differential equations governing the behavior of electromagnetic fields. One of Maxwell's equations, the Faraday's law of induction, states that a changing magnetic field induces an electric field. This means that changes in the magnetic field can cause electric fields to be created, and vice versa.
In summary, electric and magnetic fields are intimately related, and one cannot exist without the other. A changing electric field creates a magnetic field, and a changing magnetic field creates an electric field. The relationship between them is described by Maxwell's equations, which form the foundation of classical electromagnetism.
You: The known physics states that the electric field and the magnetic field for a given object are proportional to each other. i.e. an increase in the magnetic field will result in an increase in the corresponding electric field. But in some UAP events, we have observed that a decrease in electric field actually causing an increase in the corresponding magnetic field. This is not a fault in our measurements. Can we theorize what could be the case here?
The statement that an increase in the magnetic field will result in an increase in the corresponding electric field is not correct. It's important to understand that the electric field and the magnetic field are not proportional to each other, and they don't always follow a simple relationship. The relationship between the electric and magnetic fields is more complex and is described by Maxwell's equations, which include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and vice versa.
However, it's not clear whether you are referring to a specific event or a general observation about UAPs. If you are referring to a particular event in which a decreased electric field is observed, and then an increased magnetic field is observed, this could be due to a number of factors. It's possible that the observation is an artifact of the measurement process, or it could be caused by a phenomenon that is not fully understood yet.
It's important to note that our understanding of electromagnetism is based on our current scientific knowledge, and there may be limitations to our understanding of specific phenomena. Additionally, the field of unidentified aerial phenomena (UAP) is still in its infancy, and there is much we do not yet know about these types of events.
If you are referring to a specific event that you believe demonstrates this relationship, it would be beneficial to provide more details about the observation and the measurement process, as well as any additional context that could help explain the apparent phenomenon.
You: Hang on, so the Maxwell's equations don't actually say that the electric and magnetic fields are proportional to each other?
No, Maxwell's equations do not state that the electric and magnetic fields are proportional to each other. They describe the relationship between electric and magnetic fields, but the relationship is more complex than a simple proportionality. Maxwell's equations include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and this electric field results from the magnetic field changing with time.
To simplify the relationship, it might be helpful to consider that a magnetic field can be created by an electric current or by a changing electric field. Similarly, an electric field can be created by a magnetic field that is changing with time. However, the relationship between the electric and magnetic fields is not linear, and it is governed by the laws of electromagnetism.
It's important to note that the statement about the proportion of electric and magnetic fields is an oversimplification. The relationship between them is more accurately described by the complex equations of electromagnetism, which include Maxwell's equations.
You: Thank you for clarifying, that's really helpful. Can you explain to me the complex equations of electromagnetism now?
The complex equations of electromagnetism are Maxwell's equations, which include four partial differential equations that describe the behavior of electric and magnetic fields. These equations are:
- Gauss's law for electricity:
∇⋅E = 4πκρ where ρ is the electric charge density.
- Gauss's law for magnetism:
∇⋅B = 0 (because there are no magnetic monopoles).
- Faraday's law of induction:
∇ × E = -∂B/∂t which gives us the electric field of a changing magnetic field.
- Ampère-Maxwell law:
∇ × B = μ0(j + ε0∂E/∂t) which gives us the magnetic field of a changing electric field.
These equations, along with specific boundary conditions, completely describe the interaction of electric and magnetic fields in the classical theory of electromagnetism. They are complex because they involve the derivatives of the fields with respect to time and space, and they must be solved to understand the behavior of electromagnetic phenomena.
You: I notice that you put a value for the permittivity of free space in Gauss's law for electricity. Can you write the equation in general terms?
∇⋅E = 4πκρ
This is the general form of Gauss's law for electricity. κ is the electric constant, which has the value 8.854187817... × 10^-12 F/m (or C²/Nm²) in SI units. This value is often called the permittivity of free space, but it is a dimensionless constant that relates the electric field to the charge density.
In this equation, the divergence of the electric field (∇⋅E) is equal to the charge density (ρ) multiplied by 4πκ. The divergence is a measure of how much the electric field points towards charges. If there is a positive charge, the electric field will point inward. If there is a negative charge, the electric field will point outward.
This equation is the same as Coulomb's law but in a more compact and general form. It can be used to predict the behavior of electric fields in any situation involving charges, whether they are static or moving.
You: Thank you! You're great. Now can we move on to Quantum domain. What is the leading theory of electromagnetism in quantum physics?
Quantum mechanics is the leading theory of electromagnetism in the quantum domain. It provides a framework for understanding the behavior of subatomic particles and fields, including electromagnetism.
In quantum mechanics, electromagnetic fields are described by quantum fields, which are mathematical constructs that represent the energy and momentum of the field. These fields are quantized, meaning that they can only exist in discrete amounts as particles called photons.
The most fundamental principle in quantum mechanics is the uncertainty principle, which states that it is impossible to measure both the position and the momentum of a particle with arbitrary precision. This principle has implications for how we understand the behavior of electromagnetic fields at the quantum level.
Quantum mechanics also predicts the existence of virtual particles, which are temporary quantum fluctuations that disappear almost immediately after they appear. These virtual particles can interact with each other and with real particles, leading to a complex interplay that affects the behavior of electromagnetic fields.
In summary, quantum mechanics provides a framework for understanding electromagnetism in the quantum domain. It replaces classical concepts of fields with a quantum mechanical description that includes uncertainty, virtual particles, and a quantized interpretation of the field.
You: What is QED?
QED stands for Quantum Electrodynamics. It is a theory in quantum field theory that describes the interactions between electrons and photons. It is based on quantum mechanics and special relativity, and it is one of the most successful theories in physics, having made many precise predictions that have been confirmed by experiment.
In QED, electrons and photons are treated as quantum particles, which means they have wave-particle duality and can exhibit both wave-like and particle-like properties. The theory also accounts for the interaction between these particles.
QED is characterized by the fine structure constant, which is a dimensionless constant that describes the strength of the electromagnetic force. It is very small, which indicates that the electromagnetic force is much weaker than the other fundamental forces.
The theory has been tested extensively and has stood up to every experimental challenge. It has been crucial in the development of modern technology, such as lasers and semiconductors, and it continues to be an important tool for understanding high-energy processes in particle physics.
```
|
tuanio/w2v2_ablation_focal_ctc_a0.99_g1.0-best_on-ling_head-tp0.025_tl10_fp0.001_fl16
|
tuanio
| 2023-11-22T22:21:38Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-22T19:57:06Z |
---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2_ablation_focal_ctc_a0.99_g1.0-best_on-ling_head-tp0.025_tl10_fp0.001_fl16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2_ablation_focal_ctc_a0.99_g1.0-best_on-ling_head-tp0.025_tl10_fp0.001_fl16
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8596
- Wer: 0.0837
## Model description
More information needed
## Intended uses & limitations
More information needed
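The card does not yet document a usage recipe, but since the checkpoint is a fine-tuned wav2vec2 CTC model, a reasonable starting point is the standard Transformers ASR pattern below. This is a minimal, untested sketch: it assumes the repo ships a compatible processor/tokenizer for `Wav2Vec2ForCTC` and that your input audio is 16 kHz mono.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "tuanio/w2v2_ablation_focal_ctc_a0.99_g1.0-best_on-ling_head-tp0.025_tl10_fp0.001_fl16"

# Assumes the repo provides a processor compatible with the CTC head
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load 16 kHz mono audio (the file path is a placeholder)
speech, sr = librosa.load("example.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```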
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
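For readers who want to reproduce a comparable setup, the hyperparameters above map roughly onto the following `TrainingArguments`. This is only an illustrative sketch, not the author's training script; the per-device batch sizes assume the 4-GPU layout listed above, and all model/dataset wiring is omitted.

```python
from transformers import TrainingArguments

# Rough mapping of the listed hyperparameters (illustrative only)
training_args = TrainingArguments(
    output_dir="w2v2_ablation_focal_ctc",
    learning_rate=2e-5,
    per_device_train_batch_size=8,   # x4 GPUs -> total train batch size 32
    per_device_eval_batch_size=16,   # x4 GPUs -> total eval batch size 64
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=100,
    fp16=True,  # Native AMP mixed precision
)
```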
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1766.7372 | 0.94 | 100 | 1156.0582 | 18.6404 |
| 1229.4073 | 1.89 | 200 | 450.8329 | 17.3512 |
| 214.3073 | 2.83 | 300 | 87.0284 | 1.0 |
| 111.4782 | 3.77 | 400 | 79.7095 | 1.0 |
| 102.4638 | 4.72 | 500 | 75.7158 | 1.0 |
| 98.4896 | 5.66 | 600 | 74.1315 | 1.0 |
| 97.1564 | 6.6 | 700 | 72.9691 | 1.0 |
| 96.3522 | 7.55 | 800 | 72.1302 | 1.0 |
| 94.0422 | 8.49 | 900 | 72.0005 | 1.0 |
| 92.1967 | 9.43 | 1000 | 68.7518 | 0.9656 |
| 76.9513 | 10.38 | 1100 | 37.9701 | 0.5587 |
| 43.2365 | 11.32 | 1200 | 16.6431 | 0.2343 |
| 27.6251 | 12.26 | 1300 | 11.1301 | 0.1692 |
| 20.8447 | 13.21 | 1400 | 8.9874 | 0.1524 |
| 17.8373 | 14.15 | 1500 | 7.4692 | 0.1361 |
| 14.7487 | 15.09 | 1600 | 6.5928 | 0.1239 |
| 13.6336 | 16.04 | 1700 | 6.2194 | 0.1299 |
| 12.555 | 16.98 | 1800 | 5.7715 | 0.1214 |
| 11.5061 | 17.92 | 1900 | 5.3908 | 0.1139 |
| 11.6699 | 18.87 | 2000 | 5.2032 | 0.1068 |
| 10.6667 | 19.81 | 2100 | 5.1093 | 0.1150 |
| 10.4679 | 20.75 | 2200 | 4.8240 | 0.1127 |
| 9.7079 | 21.7 | 2300 | 4.6822 | 0.1014 |
| 9.5015 | 22.64 | 2400 | 4.5904 | 0.1005 |
| 9.4079 | 23.58 | 2500 | 4.6035 | 0.0956 |
| 8.5396 | 24.53 | 2600 | 4.5186 | 0.0925 |
| 8.515 | 25.47 | 2700 | 4.3337 | 0.0952 |
| 7.8243 | 26.42 | 2800 | 4.4874 | 0.1039 |
| 7.3847 | 27.36 | 2900 | 4.5759 | 0.1023 |
| 7.21 | 28.3 | 3000 | 4.3791 | 0.0982 |
| 7.2427 | 29.25 | 3100 | 4.3136 | 0.0951 |
| 7.4534 | 30.19 | 3200 | 4.3483 | 0.0966 |
| 7.3518 | 31.13 | 3300 | 4.2078 | 0.0873 |
| 6.3859 | 32.08 | 3400 | 4.0946 | 0.0921 |
| 6.9129 | 33.02 | 3500 | 4.1588 | 0.0987 |
| 6.5691 | 33.96 | 3600 | 4.1710 | 0.0970 |
| 6.2206 | 34.91 | 3700 | 4.0954 | 0.0855 |
| 6.0789 | 35.85 | 3800 | 4.0725 | 0.0908 |
| 5.9323 | 36.79 | 3900 | 4.1219 | 0.0958 |
| 6.3516 | 37.74 | 4000 | 4.0783 | 0.0918 |
| 6.1443 | 38.68 | 4100 | 4.1006 | 0.0984 |
| 5.5555 | 39.62 | 4200 | 4.0554 | 0.0920 |
| 5.6912 | 40.57 | 4300 | 4.0286 | 0.0854 |
| 5.7894 | 41.51 | 4400 | 4.0064 | 0.0866 |
| 6.0628 | 42.45 | 4500 | 4.0710 | 0.0958 |
| 5.3048 | 43.4 | 4600 | 4.1454 | 0.0915 |
| 4.8898 | 44.34 | 4700 | 4.0502 | 0.0849 |
| 5.0275 | 45.28 | 4800 | 4.1216 | 0.0834 |
| 5.1021 | 46.23 | 4900 | 4.1039 | 0.0874 |
| 4.9446 | 47.17 | 5000 | 3.9512 | 0.0830 |
| 4.4623 | 48.11 | 5100 | 4.0285 | 0.0920 |
| 4.6488 | 49.06 | 5200 | 3.9784 | 0.0877 |
| 4.2519 | 50.0 | 5300 | 3.9272 | 0.0923 |
| 4.2767 | 50.94 | 5400 | 3.8387 | 0.0813 |
| 4.534 | 51.89 | 5500 | 3.8577 | 0.0850 |
| 4.7048 | 52.83 | 5600 | 3.8982 | 0.0893 |
| 4.6566 | 53.77 | 5700 | 3.8712 | 0.0863 |
| 4.0576 | 54.72 | 5800 | 3.8719 | 0.0873 |
| 3.9801 | 55.66 | 5900 | 3.8680 | 0.0909 |
| 4.0698 | 56.6 | 6000 | 3.8687 | 0.0858 |
| 4.2237 | 57.55 | 6100 | 3.8807 | 0.0895 |
| 3.8054 | 58.49 | 6200 | 3.9619 | 0.0895 |
| 4.1874 | 59.43 | 6300 | 3.9111 | 0.0779 |
| 3.814 | 60.38 | 6400 | 3.8386 | 0.0826 |
| 4.247 | 61.32 | 6500 | 3.8373 | 0.0898 |
| 4.0079 | 62.26 | 6600 | 3.8169 | 0.0800 |
| 4.0862 | 63.21 | 6700 | 3.8315 | 0.0815 |
| 4.0043 | 64.15 | 6800 | 3.8240 | 0.0846 |
| 3.9001 | 65.09 | 6900 | 3.8030 | 0.0830 |
| 3.9259 | 66.04 | 7000 | 3.7837 | 0.0831 |
| 3.6774 | 66.98 | 7100 | 3.7848 | 0.0831 |
| 3.6523 | 67.92 | 7200 | 3.8252 | 0.0827 |
| 4.134 | 68.87 | 7300 | 3.8342 | 0.0832 |
| 3.7915 | 69.81 | 7400 | 3.9004 | 0.0854 |
| 3.4758 | 70.75 | 7500 | 3.8368 | 0.0870 |
| 3.4349 | 71.7 | 7600 | 3.9218 | 0.0917 |
| 3.2445 | 72.64 | 7700 | 3.8714 | 0.0878 |
| 3.4006 | 73.58 | 7800 | 3.8748 | 0.0859 |
| 3.1283 | 74.53 | 7900 | 3.8612 | 0.0829 |
| 3.4212 | 75.47 | 8000 | 3.8820 | 0.0858 |
| 3.5104 | 76.42 | 8100 | 3.8582 | 0.0802 |
| 3.5082 | 77.36 | 8200 | 3.9104 | 0.0868 |
| 3.159 | 78.3 | 8300 | 3.8733 | 0.0860 |
| 4.0749 | 79.25 | 8400 | 3.8720 | 0.0863 |
| 3.4915 | 80.19 | 8500 | 3.8578 | 0.0847 |
| 3.4707 | 81.13 | 8600 | 3.8410 | 0.0850 |
| 3.5001 | 82.08 | 8700 | 3.8324 | 0.0826 |
| 3.0112 | 83.02 | 8800 | 3.8402 | 0.0848 |
| 3.0206 | 83.96 | 8900 | 3.8454 | 0.0832 |
| 3.1934 | 84.91 | 9000 | 3.8446 | 0.0865 |
| 3.4633 | 85.85 | 9100 | 3.8565 | 0.0868 |
| 3.1621 | 86.79 | 9200 | 3.8597 | 0.0838 |
| 3.396 | 87.74 | 9300 | 3.8587 | 0.0830 |
| 3.2816 | 88.68 | 9400 | 3.8582 | 0.0835 |
| 3.3555 | 89.62 | 9500 | 3.8680 | 0.0826 |
| 3.2959 | 90.57 | 9600 | 3.8578 | 0.0826 |
| 3.3447 | 91.51 | 9700 | 3.8584 | 0.0824 |
| 3.6274 | 92.45 | 9800 | 3.8596 | 0.0822 |
| 3.2739 | 93.4 | 9900 | 3.8569 | 0.0817 |
| 3.1363 | 94.34 | 10000 | 3.8513 | 0.0832 |
| 3.3418 | 95.28 | 10100 | 3.8542 | 0.0830 |
| 3.2307 | 96.23 | 10200 | 3.8588 | 0.0832 |
| 3.6204 | 97.17 | 10300 | 3.8566 | 0.0832 |
| 3.4027 | 98.11 | 10400 | 3.8590 | 0.0838 |
| 2.9325 | 99.06 | 10500 | 3.8594 | 0.0832 |
| 3.4195 | 100.0 | 10600 | 3.8596 | 0.0837 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
|
ZivK/dqn-SpaceInvadersNoFrameskip-v4
|
ZivK
| 2023-11-22T22:12:22Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T22:11:44Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 505.50 +/- 183.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ZivK -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ZivK -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ZivK
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
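Beyond the RL Zoo commands above, the saved agent can in principle also be loaded directly with `stable-baselines3` plus the `huggingface_sb3` helper. This is a hedged sketch: the filename inside the repo is assumed to follow the usual RL Zoo naming convention, and the `custom_objects` overrides are a common workaround when loading checkpoints across SB3 versions.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed filename following the RL Zoo naming convention
checkpoint = load_from_hub(
    repo_id="ZivK/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# custom_objects helps when the checkpoint was saved with a different SB3 version
model = DQN.load(
    checkpoint,
    custom_objects={
        "learning_rate": 0.0,
        "lr_schedule": lambda _: 0.0,
        "exploration_schedule": lambda _: 0.0,
    },
)
print(model.policy)
```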
|
G5tony/bertin-gpt-j-6B-es-1000-prueba
|
G5tony
| 2023-11-22T22:07:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bertin-project/bertin-gpt-j-6B",
"base_model:adapter:bertin-project/bertin-gpt-j-6B",
"region:us"
] | null | 2023-11-22T22:07:11Z |
---
library_name: peft
base_model: bertin-project/bertin-gpt-j-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
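Given the 8-bit bitsandbytes config above and the `peft` base model declared in the metadata, loading the adapter for inference would typically look something like the sketch below. This is an assumption-laden example, not provided by the model author: it presumes the adapter targets the causal-LM head of `bertin-project/bertin-gpt-j-6B` and that your GPU has enough memory for the 8-bit base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "bertin-project/bertin-gpt-j-6B"
adapter_id = "G5tony/bertin-gpt-j-6B-es-1000-prueba"

# Mirrors the quantization config listed above (load_in_8bit=True)
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Attach the PEFT adapter weights on top of the quantized base model
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hola, ¿cómo estás?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```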
### Framework versions
- PEFT 0.6.3.dev0
|
ftorresa/Reinforce-CartPole-v1
|
ftorresa
| 2023-11-22T21:59:49Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T21:06:07Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TheBloke/Tess-XS-Creative-v1.0-GGUF
|
TheBloke
| 2023-11-22T21:58:53Z | 92 | 3 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"base_model:migtissera/Tess-XS-Creative-v1.0",
"base_model:quantized:migtissera/Tess-XS-Creative-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2023-11-22T21:54:45Z |
---
base_model: migtissera/Tess-XS-Creative-v1.0
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess XS Creative V1.0
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess XS Creative V1.0 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess XS Creative V1.0](https://huggingface.co/migtissera/Tess-XS-Creative-v1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Tess XS Creative V1.0](https://huggingface.co/migtissera/Tess-XS-Creative-v1.0).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-XS-Creative-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tess-xs-creative-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [tess-xs-creative-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [tess-xs-creative-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [tess-xs-creative-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [tess-xs-creative-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tess-xs-creative-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [tess-xs-creative-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [tess-xs-creative-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tess-xs-creative-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [tess-xs-creative-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [tess-xs-creative-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [tess-xs-creative-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/Tess-XS-Creative-v1.0-GGUF/blob/main/tess-xs-creative-v1.0.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Tess-XS-Creative-v1.0-GGUF and below it, a specific filename to download, such as: tess-xs-creative-v1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Tess-XS-Creative-v1.0-GGUF tess-xs-creative-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Tess-XS-Creative-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-XS-Creative-v1.0-GGUF tess-xs-creative-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
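The same single-file download can also be scripted from Python with `huggingface_hub`. A minimal sketch (pick any filename from the Provided files table above):

```python
from huggingface_hub import hf_hub_download

# Download one quantisation file from the repo to the current directory
model_path = hf_hub_download(
    repo_id="TheBloke/Tess-XS-Creative-v1.0-GGUF",
    filename="tess-xs-creative-v1.0.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)
```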
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tess-xs-creative-v1.0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Tess-XS-Creative-v1.0-GGUF", model_file="tess-xs-creative-v1.0.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
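The section above names `llama-cpp-python` as an alternative but only shows ctransformers code. For completeness, here is a minimal llama-cpp-python sketch along the same lines; the parameter values are illustrative, not tuned recommendations.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload; use 0 for CPU-only
llm = Llama(
    model_path="./tess-xs-creative-v1.0.Q4_K_M.gguf",
    n_ctx=2048,        # context length
    n_gpu_layers=32,   # adjust for your GPU, or 0
)

output = llm(
    "SYSTEM: You are a helpful assistant.\nUSER: Tell me about AI\nASSISTANT:",
    max_tokens=256,
    temperature=0.7,
    stop=["USER:"],
)
print(output["choices"][0]["text"])
```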
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Tess XS Creative V1.0
Deprecated by: https://huggingface.co/migtissera/Tess-XS-v1.1
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.0 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Sample Conversation
```
You: What's the relationship between an electric field and a magnetic field, given the same time and location?
To fully understand the relationship between electric fields and magnetic fields, we need to consider the phenomenon of electromagnetism, which is the interaction between electric charges and currents that gives rise to both electric and magnetic fields.
Electric fields are created by stationary electric charges or by changing magnetic fields. A stationary electric charge produces an electric field that extends outward in all directions from the charge. The strength of the electric field at any point depends on the charge and the distance from the charge. The direction of the electric field lines is determined by the convention that positive charges attract negative charges and repel other positive charges.
Magnetic fields, on the other hand, are created by moving electric charges and by changes in electric fields. A moving electric charge creates a magnetic field that surrounds the charge, and it flows in a circular motion. The magnetic field lines are usually depicted as spiraling outward from the moving charge. The strength of the magnetic field at any point depends on the current and the distance from the current-carrying wire.
The relationship between electric and magnetic fields is described by Maxwell's equations, which are a set of differential equations governing the behavior of electromagnetic fields. One of Maxwell's equations, the Faraday's law of induction, states that a changing magnetic field induces an electric field. This means that changes in the magnetic field can cause electric fields to be created, and vice versa.
In summary, electric and magnetic fields are intimately related, and one cannot exist without the other. A changing electric field creates a magnetic field, and a changing magnetic field creates an electric field. The relationship between them is described by Maxwell's equations, which form the foundation of classical electromagnetism.
You: The known physics states that the electric field and the magnetic field for a given object are proportional to each other. i.e. an increase in the magnetic field will result in an increase in the corresponding electric field. But in some UAP events, we have observed that a decrease in electric field actually causing an increase in the corresponding magnetic field. This is not a fault in our measurements. Can we theorize what could be the case here?
The statement that an increase in the magnetic field will result in an increase in the corresponding electric field is not correct. It's important to understand that the electric field and the magnetic field are not proportional to each other, and they don't always follow a simple relationship. The relationship between the electric and magnetic fields is more complex and is described by Maxwell's equations, which include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and vice versa.
However, it's not clear whether you are referring to a specific event or a general observation about UAPs. If you are referring to a particular event in which a decreased electric field is observed, and then an increased magnetic field is observed, this could be due to a number of factors. It's possible that the observation is an artifact of the measurement process, or it could be caused by a phenomenon that is not fully understood yet.
It's important to note that our understanding of electromagnetism is based on our current scientific knowledge, and there may be limitations to our understanding of specific phenomena. Additionally, the field of unidentified aerial phenomena (UAP) is still in its infancy, and there is much we do not yet know about these types of events.
If you are referring to a specific event that you believe demonstrates this relationship, it would be beneficial to provide more details about the observation and the measurement process, as well as any additional context that could help explain the apparent phenomenon.
You: Hang on, so the Maxwell's equations don't actually say that the electric and magnetic fields are proportional to each other?
No, Maxwell's equations do not state that the electric and magnetic fields are proportional to each other. They describe the relationship between electric and magnetic fields, but the relationship is more complex than a simple proportionality. Maxwell's equations include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and this electric field results from the magnetic field changing with time.
To simplify the relationship, it might be helpful to consider that a magnetic field can be created by an electric current or by a changing electric field. Similarly, an electric field can be created by a magnetic field that is changing with time. However, the relationship between the electric and magnetic fields is not linear, and it is governed by the laws of electromagnetism.
It's important to note that the statement about the proportion of electric and magnetic fields is an oversimplification. The relationship between them is more accurately described by the complex equations of electromagnetism, which include Maxwell's equations.
You: Thank you for clarifying, that's really helpful. Can you explain to me the complex equations of electromagnetism now?
The complex equations of electromagnetism are Maxwell's equations, which include four partial differential equations that describe the behavior of electric and magnetic fields. These equations are:
- Gauss's law for electricity:
∇⋅E = 4πκρ where ρ is the electric charge density.
- Gauss's law for magnetism:
∇⋅B = 0 (because there are no magnetic monopoles).
- Faraday's law of induction:
∇ × E = -∂B/∂t which gives us the electric field of a changing magnetic field.
- Ampère-Maxwell law:
∇ × B = μ0(j + ε0∂E/∂t) which gives us the magnetic field of a changing electric field.
These equations, along with specific boundary conditions, completely describe the interaction of electric and magnetic fields in the classical theory of electromagnetism. They are complex because they involve the derivatives of the fields with respect to time and space, and they must be solved to understand the behavior of electromagnetic phenomena.
You: I notice that you put a value for the permittivity of free space in Gauss's law for electricity. Can you write the equation in general terms?
∇⋅E = 4πκρ
This is the general form of Gauss's law for electricity. κ is the electric constant, which has the value 8.854187817... × 10^-12 F/m (or C²/Nm²) in SI units. This value is often called the permittivity of free space, but it is a dimensionless constant that relates the electric field to the charge density.
In this equation, the divergence of the electric field (∇⋅E) is equal to the charge density (ρ) multiplied by 4πκ. The divergence is a measure of how much the electric field points towards charges. If there is a positive charge, the electric field will point inward. If there is a negative charge, the electric field will point outward.
This equation is the same as Coulomb's law but in a more compact and general form. It can be used to predict the behavior of electric fields in any situation involving charges, whether they are static or moving.
You: Thank you! You're great. Now can we move on to Quantum domain. What is the leading theory of electromagnetism in quantum physics?
Quantum mechanics is the leading theory of electromagnetism in the quantum domain. It provides a framework for understanding the behavior of subatomic particles and fields, including electromagnetism.
In quantum mechanics, electromagnetic fields are described by quantum fields, which are mathematical constructs that represent the energy and momentum of the field. These fields are quantized, meaning that they can only exist in discrete amounts as particles called photons.
The most fundamental principle in quantum mechanics is the uncertainty principle, which states that it is impossible to measure both the position and the momentum of a particle with arbitrary precision. This principle has implications for how we understand the behavior of electromagnetic fields at the quantum level.
Quantum mechanics also predicts the existence of virtual particles, which are temporary quantum fluctuations that disappear almost immediately after they appear. These virtual particles can interact with each other and with real particles, leading to a complex interplay that affects the behavior of electromagnetic fields.
In summary, quantum mechanics provides a framework for understanding electromagnetism in the quantum domain. It replaces classical concepts of fields with a quantum mechanical description that includes uncertainty, virtual particles, and a quantized interpretation of the field.
You: What is QED?
QED stands for Quantum Electrodynamics. It is a theory in quantum field theory that describes the interactions between electrons and photons. It is based on quantum mechanics and special relativity, and it is one of the most successful theories in physics, having made many precise predictions that have been confirmed by experiment.
In QED, electrons and photons are treated as quantum particles, which means they have wave-particle duality and can exhibit both wave-like and particle-like properties. The theory also accounts for the interaction between these particles.
QED is characterized by the fine structure constant, which is a dimensionless constant that describes the strength of the electromagnetic force. It is very small, which indicates that the electromagnetic force is much weaker than the other fundamental forces.
The theory has been tested extensively and has stood up to every experimental challenge. It has been crucial in the development of modern technology, such as lasers and semiconductors, and it continues to be an important tool for understanding high-energy processes in particle physics.
```
<!-- original-model-card end -->
|
snintendog/Akio_Kitsune_Koyama_GotchaForce
|
snintendog
| 2023-11-22T21:50:15Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-11-22T21:47:31Z |
---
license: openrail
---
Created from all voice lines in the English release of Gotcha Force, converted from ADX to WAV files (around 2 minutes of audio). Trained with RVC v2 using RMVPE for 600 epochs.
Use 8-16 for male voices and 0-5 for female voices.
|
cerebras/Cerebras-GPT-13B
|
cerebras
| 2023-11-22T21:49:12Z | 2,216 | 648 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-03-20T20:45:54Z |
---
language:
- en
inference: false
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the_pile
pipeline_tag: text-generation
---
# Cerebras-GPT 13B
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets, and to demonstrate the simplicity and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: see the Dense Scaling Laws Paper for the training procedure, config files, and details on how to use the models.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **13B** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-13B")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-13B")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention, as opposed to the GPT-3 style sparse banded attention. The model shapes were selected either to follow an aspect ratio of 80 or to match the shapes of GPT-3 models. The learning rate was warmed up over 375M tokens (1,500 steps for the 111M and 256M models) and then cosine-decayed to one tenth of its peak value. No dropout was used, and weight decay was set to 0.1. All models were trained with a maximum sequence length (MSL) of 2048.
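As a rough illustration of that schedule (linear warmup followed by a cosine decay to one tenth of the peak learning rate), the sketch below reproduces its shape. The warmup step count and the exact implementation used in the Cerebras stack are assumptions here:
```python
import math

def lr_at_step(step, peak_lr, warmup_steps, total_steps, decay_ratio=10.0):
    """Linear warmup to peak_lr, then cosine decay down to peak_lr / decay_ratio."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    min_lr = peak_lr / decay_ratio
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Example: 13B model, peak LR 1.2E-04 (from the table above), 174,335 total steps.
# The 1,500-step warmup is illustrative; the card states the warmup only in tokens (375M).
print(lr_at_step(0, 1.2e-4, 1_500, 174_335))        # ~0 at the start of warmup
print(lr_at_step(1_500, 1.2e-4, 1_500, 174_335))    # peak LR once warmup completes
print(lr_at_step(174_335, 1.2e-4, 1_500, 174_335))  # ~peak/10 at the end of training
```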
All models were trained to the Chinchilla point: 20 tokens per model parameter. The number of steps was chosen based on the optimal batch size (which varied by model) and the fixed sequence length (2048). See the training table below for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
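To make the power-law extrapolation concrete, the sketch below fits loss ≈ A·C^b to the training-FLOPs and Pile test cross-entropy values from the 0-shot table that follows, then extrapolates to the 13B compute budget. The functional form and fitting method are illustrative assumptions; the paper's actual fit may differ (for example, by including an irreducible-loss term):
```python
import numpy as np

# (training FLOPs, Pile test cross-entropy) for the 111M through 6.7B models, from the table below.
flops = np.array([2.6e18, 1.3e19, 6.1e19, 2.8e20, 1.1e21, 6.3e21])
xent = np.array([2.566, 2.299, 2.184, 1.996, 1.834, 1.704])

# Fit log(xent) = b * log(C) + log(A), i.e. xent ~= A * C**b.
b, log_a = np.polyfit(np.log(flops), np.log(xent), 1)

# Extrapolate to the 13B compute budget (2.3e22 FLOPs); the measured value in the table is 1.575.
predicted = np.exp(log_a) * (2.3e22 ** b)
print(f"exponent b = {b:.3f}, predicted 13B cross-entropy = {predicted:.3f}")
```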
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible.
|
afrideva/evolvedSeeker_1_3-GGUF
|
afrideva
| 2023-11-22T21:48:56Z | 34 | 2 | null |
[
"gguf",
"generated_from_trainer",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"base_model:TokenBender/evolvedSeeker_1_3",
"base_model:quantized:TokenBender/evolvedSeeker_1_3",
"region:us",
"conversational"
] |
text-generation
| 2023-11-22T21:45:25Z |
---
base_model: TokenBender/evolvedSeeker_1_3
inference: false
model-index:
- name: evolvedSeeker-1_3_v_0_0_1
results: []
model_creator: TokenBender
model_name: evolvedSeeker_1_3
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# TokenBender/evolvedSeeker_1_3-GGUF
Quantized GGUF model files for [evolvedSeeker_1_3](https://huggingface.co/TokenBender/evolvedSeeker_1_3) from [TokenBender](https://huggingface.co/TokenBender)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [evolvedseeker_1_3.fp16.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.fp16.gguf) | fp16 | 2.69 GB |
| [evolvedseeker_1_3.q2_k.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q2_k.gguf) | q2_k | 631.71 MB |
| [evolvedseeker_1_3.q3_k_m.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q3_k_m.gguf) | q3_k_m | 704.97 MB |
| [evolvedseeker_1_3.q4_k_m.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q4_k_m.gguf) | q4_k_m | 873.58 MB |
| [evolvedseeker_1_3.q5_k_m.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q5_k_m.gguf) | q5_k_m | 1.00 GB |
| [evolvedseeker_1_3.q6_k.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q6_k.gguf) | q6_k | 1.17 GB |
| [evolvedseeker_1_3.q8_0.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q8_0.gguf) | q8_0 | 1.43 GB |
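For a quick local test of one of these files, here is a hedged sketch using llama-cpp-python; the file name comes from the table above, while the context size and generation settings are illustrative:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="evolvedseeker_1_3.q4_k_m.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "Write a function that reverses the letters in each word of a sentence."}
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```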
## Original Model Card:
# evolvedSeeker-1_3
EvolvedSeeker v0.0.1 (First phase)
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on 50k instructions for 3 epochs.
I have mostly curated instructions from evolInstruct datasets and some portions of glaive coder.
Around 3k answers were modified via self-instruct.
Collaborate or Consult me - [Twitter](https://twitter.com/4evaBehindSOTA), [Discord](https://discord.gg/ftEM63pzs2)
*The recommended prompt format is ChatML; Alpaca will also work, but take care with the EOT token.*
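For reference, a generic ChatML layout looks like the block below. The `<|im_start|>`/`<|im_end|>` markers are the generic ChatML convention and are shown here as an assumption; the tokenizer's chat template (used via `apply_chat_template` in the next section) is the authoritative format and handles the model's `<|EOT|>` token for you.
```
<|im_start|>system
You are a helpful coding assistant.<|im_end|>
<|im_start|>user
Reverse the letters in each word of a sentence.<|im_end|>
<|im_start|>assistant
```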
#### Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True).cuda()
messages=[
{ 'role': 'user', 'content': "write a program to reverse letters in each word in a sentence without reversing order of words in the sentence."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
# 32021 is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
## Model description
First model of Project PIC (Partner-in-Crime) in 1.3B range.
Almost all of the work on this model is still pending, hence v0.0.1.

## Intended uses & limitations
Superfast Copilot
Runs near-losslessly when quantized, in about 1 GB of RAM.
Useful for code dataset curation and evaluation.
Limitations - This is a smol model, so smol brain; it may have crammed a few things.
Reasoning tests may fail beyond a certain point.
## Training procedure
SFT
### Training results
Humaneval Score - 68.29%

### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
cerebras/Cerebras-GPT-6.7B
|
cerebras
| 2023-11-22T21:48:55Z | 2,099 | 65 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-03-20T20:45:13Z |
---
language:
- en
inference: false
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the_pile
pipeline_tag: text-generation
---
# Cerebras-GPT 6.7B
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets, and to demonstrate the simplicity and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: see the Dense Scaling Laws Paper for the training procedure, config files, and details on how to use the models.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **6.7B** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-6.7B")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-6.7B")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention, as opposed to the GPT-3 style sparse banded attention. The model shapes were selected either to follow an aspect ratio of 80 or to match the shapes of GPT-3 models. The learning rate was warmed up over 375M tokens (1,500 steps for the 111M and 256M models) and then cosine-decayed to one tenth of its peak value. No dropout was used, and weight decay was set to 0.1. All models were trained with a maximum sequence length (MSL) of 2048.
All models were trained to the Chinchilla point: 20 tokens per model parameter. The number of steps was chosen based on the optimal batch size (which varied by model) and the fixed sequence length (2048). See the training table below for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
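As a back-of-the-envelope check on the table above, the token counts follow from the 20-tokens-per-parameter rule, and training compute is roughly the common 6·N·D estimate. This is a hedged approximation, not the exact accounting behind the table:
```python
# Rough check for the 6.7B row above.
n_params = 6.7e9
tokens = 20 * n_params             # Chinchilla-style 20 tokens per parameter
flops = 6 * n_params * tokens      # common 6*N*D approximation for training FLOPs

print(f"tokens ~ {tokens:.2e}")    # ~1.34e11; the table lists 1.33e11
print(f"flops  ~ {flops:.2e}")     # ~5.4e21;  the table lists 6.3e21 (exact accounting differs)
```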
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible.
|
cerebras/Cerebras-GPT-256M
|
cerebras
| 2023-11-22T21:48:08Z | 1,690 | 25 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-20T20:40:06Z |
---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the_pile
pipeline_tag: text-generation
---
# Cerebras-GPT 256M
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets, and to demonstrate the simplicity and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: see the Dense Scaling Laws Paper for the training procedure, config files, and details on how to use the models.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **256M** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-256M")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-256M")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
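For example, the tokenization step can be reproduced on a snippet of text with the GPT-2 BPE vocabulary via the Hugging Face tokenizer. This is a sketch of the tokenizer call only, not the paper's full preprocessing pipeline:
```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")  # the 50,257-token BPE vocabulary noted above
sample = "Generative AI is transforming how software is written."
ids = tok(sample).input_ids

print(len(ids), ids[:10])
print(tok.convert_ids_to_tokens(ids)[:10])
```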
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention, as opposed to the GPT-3 style sparse banded attention. The model shapes were selected either to follow an aspect ratio of 80 or to match the shapes of GPT-3 models. The learning rate was warmed up over 375M tokens (1,500 steps for the 111M and 256M models) and then cosine-decayed to one tenth of its peak value. No dropout was used, and weight decay was set to 0.1. All models were trained with a maximum sequence length (MSL) of 2048.
All models were trained to the Chinchilla point: 20 tokens per model parameter. The number of steps was chosen based on the optimal batch size (which varied by model) and the fixed sequence length (2048). See the training table below for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible.
|
cerebras/Cerebras-GPT-590M
|
cerebras
| 2023-11-22T21:47:55Z | 2,727 | 20 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-20T20:40:39Z |
---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the_pile
pipeline_tag: text-generation
---
# Cerebras-GPT 590M
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets, and to demonstrate the simplicity and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: see the Dense Scaling Laws Paper for the training procedure, config files, and details on how to use the models.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **590M** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-590M")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-590M")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
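The ftfy normalization step looks roughly like the following; this is a sketch of the library call only, not Eleuther's full cleaning and filtering pipeline:
```python
import ftfy  # pip install ftfy

# A typical mojibake example: UTF-8 text that was mis-decoded as Latin-1.
raw = "✔ No problems"
print(ftfy.fix_text(raw))  # -> "✔ No problems"
```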
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention, as opposed to the GPT-3 style sparse banded attention. The model shapes were selected either to follow an aspect ratio of 80 or to match the shapes of GPT-3 models. The learning rate was warmed up over 375M tokens (1,500 steps for the 111M and 256M models) and then cosine-decayed to one tenth of its peak value. No dropout was used, and weight decay was set to 0.1. All models were trained with a maximum sequence length (MSL) of 2048.
All models were trained to the Chinchilla point: 20 tokens per model parameter. The number of steps was chosen based on the optimal batch size (which varied by model) and the fixed sequence length (2048). See the training table below for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
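As a minimal sketch of loading one of these checkpoints with the Hugging Face `transformers` library (the repository name below follows the family's published naming scheme on the Hub; the prompt and generation settings are arbitrary placeholders):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Smallest family member shown here; the larger checkpoints follow the same pattern.
model_name = "cerebras/Cerebras-GPT-111M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Generative AI is ", return_tensors="pt")
# Deterministic decoding with a mild repetition constraint; tune for your use case.
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False, no_repeat_ngram_size=2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```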
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints, such as toxicity analysis, gender bias, pejorative content, and racially sensitive content. Please refer to the Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible.
|
snintendog/Mana_Umihara_GotchaForce
|
snintendog
| 2023-11-22T21:45:27Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-11-22T21:42:56Z |
---
license: openrail
---
Created from all voice lines in the English release of Gotcha Force (ADX converted to WAV files), around 2 minutes of audio. (RMVPE) (RVC V2) (600 Epochs)
Male 8-12, Female 3-8 for voices.
|
LoneStriker/Tess-XS-Creative-v1.0-8.0bpw-h8-exl2
|
LoneStriker
| 2023-11-22T21:03:25Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T20:58:56Z |
---
license: apache-2.0
---
Deprecated by: https://huggingface.co/migtissera/Tess-XS-v1.1
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.0 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
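As a small illustration of how this template can be assembled programmatically, the helper below builds a multi-turn prompt string in the same format. This is only a sketch: the helper name and the use of single newlines between turns are assumptions, so adjust the separators to whatever produces the best results for you.

```python
# Hypothetical helper for assembling prompts in the SYSTEM/USER/ASSISTANT format shown above.
def build_prompt(system: str, history: list[tuple[str, str]], user_message: str) -> str:
    """history holds (user, assistant) pairs from earlier turns of the conversation."""
    prompt = f"SYSTEM: {system}\n"
    for user, assistant in history:
        prompt += f"USER: {user}\nASSISTANT: {assistant}\n"
    # Leave ASSISTANT: open so the model continues from this point.
    prompt += f"USER: {user_message}\nASSISTANT:"
    return prompt

print(build_prompt(
    system="You are a helpful assistant.",
    history=[],
    user_message="What is the relationship between Earth's atmosphere, magnetic field and gravity?",
))
```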
# Sample Conversation
```
You: What's the relationship between an electric field and a magnetic field, given the same time and location?
To fully understand the relationship between electric fields and magnetic fields, we need to consider the phenomenon of electromagnetism, which is the interaction between electric charges and currents that gives rise to both electric and magnetic fields.
Electric fields are created by stationary electric charges or by changing magnetic fields. A stationary electric charge produces an electric field that extends outward in all directions from the charge. The strength of the electric field at any point depends on the charge and the distance from the charge. The direction of the electric field lines is determined by the convention that positive charges attract negative charges and repel other positive charges.
Magnetic fields, on the other hand, are created by moving electric charges and by changes in electric fields. A moving electric charge creates a magnetic field that surrounds the charge, and it flows in a circular motion. The magnetic field lines are usually depicted as spiraling outward from the moving charge. The strength of the magnetic field at any point depends on the current and the distance from the current-carrying wire.
The relationship between electric and magnetic fields is described by Maxwell's equations, which are a set of differential equations governing the behavior of electromagnetic fields. One of Maxwell's equations, the Faraday's law of induction, states that a changing magnetic field induces an electric field. This means that changes in the magnetic field can cause electric fields to be created, and vice versa.
In summary, electric and magnetic fields are intimately related, and one cannot exist without the other. A changing electric field creates a magnetic field, and a changing magnetic field creates an electric field. The relationship between them is described by Maxwell's equations, which form the foundation of classical electromagnetism.
You: The known physics states that the electric field and the magnetic field for a given object are proportional to each other. i.e. an increase in the magnetic field will result in an increase in the corresponding electric field. But in some UAP events, we have observed that a decrease in electric field actually causing an increase in the corresponding magnetic field. This is not a fault in our measurements. Can we theorize what could be the case here?
The statement that an increase in the magnetic field will result in an increase in the corresponding electric field is not correct. It's important to understand that the electric field and the magnetic field are not proportional to each other, and they don't always follow a simple relationship. The relationship between the electric and magnetic fields is more complex and is described by Maxwell's equations, which include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and vice versa.
However, it's not clear whether you are referring to a specific event or a general observation about UAPs. If you are referring to a particular event in which a decreased electric field is observed, and then an increased magnetic field is observed, this could be due to a number of factors. It's possible that the observation is an artifact of the measurement process, or it could be caused by a phenomenon that is not fully understood yet.
It's important to note that our understanding of electromagnetism is based on our current scientific knowledge, and there may be limitations to our understanding of specific phenomena. Additionally, the field of unidentified aerial phenomena (UAP) is still in its infancy, and there is much we do not yet know about these types of events.
If you are referring to a specific event that you believe demonstrates this relationship, it would be beneficial to provide more details about the observation and the measurement process, as well as any additional context that could help explain the apparent phenomenon.
You: Hang on, so the Maxwell's equations don't actually say that the electric and magnetic fields are proportional to each other?
No, Maxwell's equations do not state that the electric and magnetic fields are proportional to each other. They describe the relationship between electric and magnetic fields, but the relationship is more complex than a simple proportionality. Maxwell's equations include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and this electric field results from the magnetic field changing with time.
To simplify the relationship, it might be helpful to consider that a magnetic field can be created by an electric current or by a changing electric field. Similarly, an electric field can be created by a magnetic field that is changing with time. However, the relationship between the electric and magnetic fields is not linear, and it is governed by the laws of electromagnetism.
It's important to note that the statement about the proportion of electric and magnetic fields is an oversimplification. The relationship between them is more accurately described by the complex equations of electromagnetism, which include Maxwell's equations.
You: Thank you for clarifying, that's really helpful. Can you explain to me the complex equations of electromagnetism now?
The complex equations of electromagnetism are Maxwell's equations, which include four partial differential equations that describe the behavior of electric and magnetic fields. These equations are:
- Gauss's law for electricity:
∇⋅E = 4πκρ where ρ is the electric charge density.
- Gauss's law for magnetism:
∇⋅B = 0 (because there are no magnetic monopoles).
- Faraday's law of induction:
∇ × E = -∂B/∂t which gives us the electric field of a changing magnetic field.
- Ampère-Maxwell law:
∇ × B = μ0(j + ε0∂E/∂t) which gives us the magnetic field of a changing electric field.
These equations, along with specific boundary conditions, completely describe the interaction of electric and magnetic fields in the classical theory of electromagnetism. They are complex because they involve the derivatives of the fields with respect to time and space, and they must be solved to understand the behavior of electromagnetic phenomena.
You: I notice that you put a value for the permittivity of free space in Gauss's law for electricity. Can you write the equation in general terms?
∇⋅E = 4πκρ
This is the general form of Gauss's law for electricity. κ is the electric constant, which has the value 8.854187817... × 10^-12 F/m (or C²/Nm²) in SI units. This value is often called the permittivity of free space, but it is a dimensionless constant that relates the electric field to the charge density.
In this equation, the divergence of the electric field (∇⋅E) is equal to the charge density (ρ) multiplied by 4πκ. The divergence is a measure of how much the electric field points towards charges. If there is a positive charge, the electric field will point inward. If there is a negative charge, the electric field will point outward.
This equation is the same as Coulomb's law but in a more compact and general form. It can be used to predict the behavior of electric fields in any situation involving charges, whether they are static or moving.
You: Thank you! You're great. Now can we move on to Quantum domain. What is the leading theory of electromagnetism in quantum physics?
Quantum mechanics is the leading theory of electromagnetism in the quantum domain. It provides a framework for understanding the behavior of subatomic particles and fields, including electromagnetism.
In quantum mechanics, electromagnetic fields are described by quantum fields, which are mathematical constructs that represent the energy and momentum of the field. These fields are quantized, meaning that they can only exist in discrete amounts as particles called photons.
The most fundamental principle in quantum mechanics is the uncertainty principle, which states that it is impossible to measure both the position and the momentum of a particle with arbitrary precision. This principle has implications for how we understand the behavior of electromagnetic fields at the quantum level.
Quantum mechanics also predicts the existence of virtual particles, which are temporary quantum fluctuations that disappear almost immediately after they appear. These virtual particles can interact with each other and with real particles, leading to a complex interplay that affects the behavior of electromagnetic fields.
In summary, quantum mechanics provides a framework for understanding electromagnetism in the quantum domain. It replaces classical concepts of fields with a quantum mechanical description that includes uncertainty, virtual particles, and a quantized interpretation of the field.
You: What is QED?
QED stands for Quantum Electrodynamics. It is a theory in quantum field theory that describes the interactions between electrons and photons. It is based on quantum mechanics and special relativity, and it is one of the most successful theories in physics, having made many precise predictions that have been confirmed by experiment.
In QED, electrons and photons are treated as quantum particles, which means they have wave-particle duality and can exhibit both wave-like and particle-like properties. The theory also accounts for the interaction between these particles.
QED is characterized by the fine structure constant, which is a dimensionless constant that describes the strength of the electromagnetic force. It is very small, which indicates that the electromagnetic force is much weaker than the other fundamental forces.
The theory has been tested extensively and has stood up to every experimental challenge. It has been crucial in the development of modern technology, such as lasers and semiconductors, and it continues to be an important tool for understanding high-energy processes in particle physics.
```
|
ugrozkr/bert-base-turkish-cased-finetuned-ner
|
ugrozkr
| 2023-11-22T20:54:48Z | 32 | 4 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-09T18:53:19Z |
---
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-turkish-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-cased-finetuned-ner
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1560
- Precision: 0.9099
- Recall: 0.9245
- F1: 0.9171
- Accuracy: 0.9629
## Model description
More information needed
## Intended uses & limitations
More information needed
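A minimal inference sketch using the `transformers` pipeline API is shown below; the example sentence is arbitrary and the entity label set depends on the (undocumented) training dataset.

```python
from transformers import pipeline

# Token-classification pipeline around this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="ugrozkr/bert-base-turkish-cased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Mustafa Kemal Atatürk 1881'de Selanik'te doğdu."))
```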
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2198 | 1.0 | 1250 | 0.1644 | 0.8814 | 0.9053 | 0.8932 | 0.9538 |
| 0.1222 | 2.0 | 2500 | 0.1549 | 0.9020 | 0.9195 | 0.9107 | 0.9602 |
| 0.0766 | 3.0 | 3750 | 0.1560 | 0.9099 | 0.9245 | 0.9171 | 0.9629 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
preetk21/my_awesome_billsum_model
|
preetk21
| 2023-11-22T20:53:35Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-22T20:43:09Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1937
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4178
- Rouge1: 0.1937
- Rouge2: 0.0954
- Rougel: 0.1647
- Rougelsum: 0.1648
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
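A minimal inference sketch using the `transformers` pipeline API is shown below; the `summarize:` prefix is the usual T5 convention, and the input text and generation lengths are placeholders.

```python
from transformers import pipeline

# Summarization pipeline around this billsum fine-tune of t5-small.
summarizer = pipeline("summarization", model="preetk21/my_awesome_billsum_model")

text = "summarize: The bill requires the state to publish an annual report on water usage by district ..."
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```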
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.4832 | 0.1601 | 0.0661 | 0.1339 | 0.1336 | 19.0 |
| No log | 2.0 | 124 | 2.4431 | 0.1841 | 0.0881 | 0.1559 | 0.1557 | 19.0 |
| No log | 3.0 | 186 | 2.4227 | 0.1919 | 0.0942 | 0.1635 | 0.1635 | 19.0 |
| No log | 4.0 | 248 | 2.4178 | 0.1937 | 0.0954 | 0.1647 | 0.1648 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Tess-XS-Creative-v1.0-4.0bpw-h6-exl2
|
LoneStriker
| 2023-11-22T20:48:30Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T20:46:04Z |
---
license: apache-2.0
---
Deprecated by: https://huggingface.co/migtissera/Tess-XS-v1.1
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.0 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Sample Conversation
```
You: What's the relationship between an electric field and a magnetic field, given the same time and location?
To fully understand the relationship between electric fields and magnetic fields, we need to consider the phenomenon of electromagnetism, which is the interaction between electric charges and currents that gives rise to both electric and magnetic fields.
Electric fields are created by stationary electric charges or by changing magnetic fields. A stationary electric charge produces an electric field that extends outward in all directions from the charge. The strength of the electric field at any point depends on the charge and the distance from the charge. The direction of the electric field lines is determined by the convention that positive charges attract negative charges and repel other positive charges.
Magnetic fields, on the other hand, are created by moving electric charges and by changes in electric fields. A moving electric charge creates a magnetic field that surrounds the charge, and it flows in a circular motion. The magnetic field lines are usually depicted as spiraling outward from the moving charge. The strength of the magnetic field at any point depends on the current and the distance from the current-carrying wire.
The relationship between electric and magnetic fields is described by Maxwell's equations, which are a set of differential equations governing the behavior of electromagnetic fields. One of Maxwell's equations, the Faraday's law of induction, states that a changing magnetic field induces an electric field. This means that changes in the magnetic field can cause electric fields to be created, and vice versa.
In summary, electric and magnetic fields are intimately related, and one cannot exist without the other. A changing electric field creates a magnetic field, and a changing magnetic field creates an electric field. The relationship between them is described by Maxwell's equations, which form the foundation of classical electromagnetism.
You: The known physics states that the electric field and the magnetic field for a given object are proportional to each other. i.e. an increase in the magnetic field will result in an increase in the corresponding electric field. But in some UAP events, we have observed that a decrease in electric field actually causing an increase in the corresponding magnetic field. This is not a fault in our measurements. Can we theorize what could be the case here?
The statement that an increase in the magnetic field will result in an increase in the corresponding electric field is not correct. It's important to understand that the electric field and the magnetic field are not proportional to each other, and they don't always follow a simple relationship. The relationship between the electric and magnetic fields is more complex and is described by Maxwell's equations, which include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and vice versa.
However, it's not clear whether you are referring to a specific event or a general observation about UAPs. If you are referring to a particular event in which a decreased electric field is observed, and then an increased magnetic field is observed, this could be due to a number of factors. It's possible that the observation is an artifact of the measurement process, or it could be caused by a phenomenon that is not fully understood yet.
It's important to note that our understanding of electromagnetism is based on our current scientific knowledge, and there may be limitations to our understanding of specific phenomena. Additionally, the field of unidentified aerial phenomena (UAP) is still in its infancy, and there is much we do not yet know about these types of events.
If you are referring to a specific event that you believe demonstrates this relationship, it would be beneficial to provide more details about the observation and the measurement process, as well as any additional context that could help explain the apparent phenomenon.
You: Hang on, so the Maxwell's equations don't actually say that the electric and magnetic fields are proportional to each other?
No, Maxwell's equations do not state that the electric and magnetic fields are proportional to each other. They describe the relationship between electric and magnetic fields, but the relationship is more complex than a simple proportionality. Maxwell's equations include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and this electric field results from the magnetic field changing with time.
To simplify the relationship, it might be helpful to consider that a magnetic field can be created by an electric current or by a changing electric field. Similarly, an electric field can be created by a magnetic field that is changing with time. However, the relationship between the electric and magnetic fields is not linear, and it is governed by the laws of electromagnetism.
It's important to note that the statement about the proportion of electric and magnetic fields is an oversimplification. The relationship between them is more accurately described by the complex equations of electromagnetism, which include Maxwell's equations.
You: Thank you for clarifying, that's really helpful. Can you explain to me the complex equations of electromagnetism now?
The complex equations of electromagnetism are Maxwell's equations, which include four partial differential equations that describe the behavior of electric and magnetic fields. These equations are:
- Gauss's law for electricity:
∇⋅E = 4πκρ where ρ is the electric charge density.
- Gauss's law for magnetism:
∇⋅B = 0 (because there are no magnetic monopoles).
- Faraday's law of induction:
∇ × E = -∂B/∂t which gives us the electric field of a changing magnetic field.
- Ampère-Maxwell law:
∇ × B = μ0(j + ε0∂E/∂t) which gives us the magnetic field of a changing electric field.
These equations, along with specific boundary conditions, completely describe the interaction of electric and magnetic fields in the classical theory of electromagnetism. They are complex because they involve the derivatives of the fields with respect to time and space, and they must be solved to understand the behavior of electromagnetic phenomena.
You: I notice that you put a value for the permittivity of free space in Gauss's law for electricity. Can you write the equation in general terms?
∇⋅E = 4πκρ
This is the general form of Gauss's law for electricity. κ is the electric constant, which has the value 8.854187817... × 10^-12 F/m (or C²/Nm²) in SI units. This value is often called the permittivity of free space, but it is a dimensionless constant that relates the electric field to the charge density.
In this equation, the divergence of the electric field (∇⋅E) is equal to the charge density (ρ) multiplied by 4πκ. The divergence is a measure of how much the electric field points towards charges. If there is a positive charge, the electric field will point inward. If there is a negative charge, the electric field will point outward.
This equation is the same as Coulomb's law but in a more compact and general form. It can be used to predict the behavior of electric fields in any situation involving charges, whether they are static or moving.
You: Thank you! You're great. Now can we move on to Quantum domain. What is the leading theory of electromagnetism in quantum physics?
Quantum mechanics is the leading theory of electromagnetism in the quantum domain. It provides a framework for understanding the behavior of subatomic particles and fields, including electromagnetism.
In quantum mechanics, electromagnetic fields are described by quantum fields, which are mathematical constructs that represent the energy and momentum of the field. These fields are quantized, meaning that they can only exist in discrete amounts as particles called photons.
The most fundamental principle in quantum mechanics is the uncertainty principle, which states that it is impossible to measure both the position and the momentum of a particle with arbitrary precision. This principle has implications for how we understand the behavior of electromagnetic fields at the quantum level.
Quantum mechanics also predicts the existence of virtual particles, which are temporary quantum fluctuations that disappear almost immediately after they appear. These virtual particles can interact with each other and with real particles, leading to a complex interplay that affects the behavior of electromagnetic fields.
In summary, quantum mechanics provides a framework for understanding electromagnetism in the quantum domain. It replaces classical concepts of fields with a quantum mechanical description that includes uncertainty, virtual particles, and a quantized interpretation of the field.
You: What is QED?
QED stands for Quantum Electrodynamics. It is a theory in quantum field theory that describes the interactions between electrons and photons. It is based on quantum mechanics and special relativity, and it is one of the most successful theories in physics, having made many precise predictions that have been confirmed by experiment.
In QED, electrons and photons are treated as quantum particles, which means they have wave-particle duality and can exhibit both wave-like and particle-like properties. The theory also accounts for the interaction between these particles.
QED is characterized by the fine structure constant, which is a dimensionless constant that describes the strength of the electromagnetic force. It is very small, which indicates that the electromagnetic force is much weaker than the other fundamental forces.
The theory has been tested extensively and has stood up to every experimental challenge. It has been crucial in the development of modern technology, such as lasers and semiconductors, and it continues to be an important tool for understanding high-energy processes in particle physics.
```
|
RajuEEE/GeneratorModel_SFT_second_attempt
|
RajuEEE
| 2023-11-22T20:38:56Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-large",
"base_model:adapter:openai-community/gpt2-large",
"region:us"
] | null | 2023-11-22T20:38:51Z |
---
library_name: peft
base_model: gpt2-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
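A loading sketch consistent with the configuration above might look like the following. Treat it as an assumption-laden illustration: the base model name is taken from this card's metadata, 8-bit loading at inference time mirrors the training-time quantization but is not strictly required, and the adapter repository id is this card's own.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 8-bit quantization config mirroring the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

base = AutoModelForCausalLM.from_pretrained(
    "gpt2-large", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2-large")

# Attach the PEFT adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "RajuEEE/GeneratorModel_SFT_second_attempt")
```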
### Framework versions
- PEFT 0.6.3.dev0
|
jalaluddin94/fine-tuning-xlmr-large
|
jalaluddin94
| 2023-11-22T20:37:47Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-22T20:36:33Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: fine-tuning-xlmr-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuning-xlmr-large
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7558
- Accuracy: 0.7692
- Precision: 0.7692
- Recall: 0.7692
- F1 Score: 0.7693
## Model description
More information needed
## Intended uses & limitations
More information needed
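A minimal inference sketch using the `transformers` pipeline API is shown below; since the training dataset is not documented here, the label meanings and the example input are placeholders.

```python
from transformers import pipeline

# Sequence-classification pipeline around this fine-tuned XLM-R checkpoint.
classifier = pipeline("text-classification", model="jalaluddin94/fine-tuning-xlmr-large")

# The label set returned here depends on the (undocumented) fine-tuning dataset.
print(classifier("This is an example sentence to classify."))
```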
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 101
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:---------:|:------:|:--------:|
| 1.3385 | 1.0 | 10330 | 1.8072 | 0.5708 | 0.5708 | 0.5708 | 0.5622 |
| 1.7231 | 2.0 | 20660 | 1.8354 | 0.6445 | 0.6445 | 0.6445 | 0.6454 |
| 1.4049 | 3.0 | 30990 | 1.8380 | 0.6969 | 0.6969 | 0.6969 | 0.6990 |
| 1.4543 | 4.0 | 41320 | 1.5726 | 0.7415 | 0.7415 | 0.7415 | 0.7417 |
| 1.4139 | 5.0 | 51650 | 1.6838 | 0.7424 | 0.7424 | 0.7424 | 0.7439 |
| 1.2368 | 6.0 | 61980 | 1.6794 | 0.7424 | 0.7424 | 0.7424 | 0.7448 |
| 1.0418 | 7.0 | 72310 | 1.6720 | 0.7542 | 0.7542 | 0.7542 | 0.7556 |
| 1.246 | 8.0 | 82640 | 1.6746 | 0.7638 | 0.7638 | 0.7638 | 0.7642 |
| 0.9896 | 9.0 | 92970 | 1.7497 | 0.7674 | 0.7674 | 0.7674 | 0.7666 |
| 0.9855 | 10.0 | 103300 | 1.7558 | 0.7692 | 0.7692 | 0.7692 | 0.7693 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
maddes8cht/gorilla-llm-gorilla-falcon-7b-hf-v0-gguf
|
maddes8cht
| 2023-11-22T20:29:07Z | 153 | 1 | null |
[
"gguf",
"api",
"en",
"dataset:gorilla-llm/APIBench",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-10-21T20:32:34Z |
---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# gorilla-falcon-7b-hf-v0 - GGUF
- Model creator: [gorilla-llm](https://huggingface.co/gorilla-llm)
- Original model: [gorilla-falcon-7b-hf-v0](https://huggingface.co/gorilla-llm/gorilla-falcon-7b-hf-v0)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
The ***Gorilla*** model variant is quite special, as it outputs syntactically correct API calls for a vast number of known APIs.
Read the original Model Card carefully to get best results. Maybe even consult additional video tutorials.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
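For instance, a GGUF file from this repository can be loaded through the `llama-cpp-python` bindings built on llama.cpp. The file name below is a placeholder (use whichever quantization variant you downloaded), and the prompt conventions should follow the original Gorilla model card.

```python
from llama_cpp import Llama

# Placeholder file name: use whichever quantization variant you downloaded from this repo.
llm = Llama(model_path="gorilla-falcon-7b-hf-v0.Q4_K_M.gguf", n_ctx=2048)

# Gorilla is tuned to emit API calls; consult the original model card for exact prompt formatting.
out = llm("I want to translate a sentence from English to German.", max_tokens=128)
print(out["choices"][0]["text"])
```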
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may see bigger differences between the two answers than between the Q6_K and the original.
---
# Original Model Card:
license: apache-2.0
---
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[Website](https://maddes8cht.github.io)
[Stack Exchange](https://stackexchange.com/users/26485911)
[GitHub](https://github.com/maddes8cht)
[Hugging Face](https://huggingface.co/maddes8cht)
[Twitter](https://twitter.com/maddes1966)
</center>
|
maddes8cht/Dimensity-Dimensity-3B-gguf
|
maddes8cht
| 2023-11-22T20:29:06Z | 98 | 0 | null |
[
"gguf",
"sft",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-11-15T17:16:17Z |
---
license: mit
language:
- en
tags:
- sft
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# Dimensity-3B - GGUF
- Model creator: [Dimensity](https://huggingface.co/Dimensity)
- Original model: [Dimensity-3B](https://huggingface.co/Dimensity/Dimensity-3B)
# StableLM
This is a Model based on StableLM.
StableLM is a family of language models by Stability AI.
## Note:
Current (as of 2023-11-15) implementations of Llama.cpp only support GPU offloading of up to 34 layers with these StableLM models.
The model will crash immediately if -ngl is larger than 34.
The model works fine, however, without any GPU acceleration.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may see bigger differences between the two answers than between the Q6_K and the original.
---
# Original Model Card:
```Dimensity-3B```
# Model Details
Dimensity-3B is a finetuned version of the StableLM framework trained on a variety of conversational data. It contains 3 billion parameters.
# Intended Uses
This model is intended for conversational AI applications. It can engage in open-ended dialogue by generating responses to user prompts.
## Factors
# Training Data
The model was trained on a large dataset of over 100 million conversational exchanges extracted from Reddit comments, customer support logs, and other online dialogues.
# Prompt Template
The model was finetuned using the following prompt template:
```
### Human: {prompt}
### Assistant:
```
This prompts the model to take on an assistant role.
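A short inference sketch using the `llama-cpp-python` bindings is shown below; the GGUF file name is a placeholder, and `n_gpu_layers` is capped at 34 to respect the StableLM offloading limitation noted earlier in this card.

```python
from llama_cpp import Llama

# Placeholder file name: use whichever quantization variant you downloaded.
# n_gpu_layers stays at 34 because of the StableLM offloading limit mentioned above.
llm = Llama(model_path="Dimensity-3B.Q5_K_M.gguf", n_ctx=2048, n_gpu_layers=34)

prompt = "### Human: Give me three tips for writing clear documentation.\n### Assistant:"
out = llm(prompt, max_tokens=200, temperature=0.7, stop=["### Human:"])
print(out["choices"][0]["text"])
```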
# Ethical Considerations
As the model was trained on public conversational data, it may generate responses that contain harmful stereotypes or toxic content. The model should be used with caution in sensitive contexts.
# Caveats and Recommendations
This model is designed for open-ended conversation. It may sometimes generate plausible-sounding but incorrect information. Outputs should be validated against external sources.
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[Website](https://maddes8cht.github.io)
[Stack Exchange](https://stackexchange.com/users/26485911)
[GitHub](https://github.com/maddes8cht)
[Hugging Face](https://huggingface.co/maddes8cht)
[Twitter](https://twitter.com/maddes1966)
</center>
|
maddes8cht/stabilityai-stablecode-instruct-alpha-3b-gguf
|
maddes8cht
| 2023-11-22T20:27:47Z | 66 | 0 | null |
[
"gguf",
"causal-lm",
"code",
"arxiv:2104.09864",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2023-11-22T17:30:23Z |
---
language:
- code
tags:
- causal-lm
model-index:
- name: stabilityai/stablecode-instruct-alpha-3b
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.2689
verified: false
- name: pass@10
type: pass@10
value: 0.3618
verified: false
license: other
extra_gated_prompt: >
STABLECODE RESEARCH LICENSE AGREEMENT
Dated: August 8, 2023
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Software Products set forth herein.
"Documentation" means any specifications, manuals, documentation, and other written information provided by Stability AI related to the Software.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
"Stability AI" or "we" means Stability AI Ltd.
"Software" means, collectively, Stability AI’s proprietary StableCode made available under this Agreement.
"Software Products" means Software and Documentation.
By using or distributing any portion or element of the Software Products, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Subject to your compliance with this Agreement and the Documentation, Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Stability AI’s intellectual property or other rights owned by Stability AI embodied in the Software Products to reproduce, distribute, and create derivative works of the Software Products solely for your non-commercial research purposes.
b. If you distribute or make the Software Products, or any derivative works thereof, available to a third party, you shall (i) provide a copy of this Agreement to such third party, and (ii) retain the following attribution notice within a "Notice" text file distributed as a part of such copies: "StableCode is licensed under the StableCode Research License, Copyright (c) Stability AI Ltd. All Rights Reserved.”
2. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE SOFTWARE PRODUCTS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE SOFTWARE PRODUCTS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE SOFTWARE PRODUCTS AND ANY OUTPUT AND RESULTS.
3. Limitation of Liability. IN NO EVENT WILL STABILITY AI OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF STABILITY AI OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
4. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Software Products, neither Stability AI nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Software Products.
b. Subject to Stability AI’s ownership of the Software Products and derivatives made by or for Stability AI, with respect to any derivative works and modifications of the Software Products that are made by you, as between you and Stability AI, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Stability AI (including a cross-claim or counterclaim in a lawsuit) alleging that the Software Products or associated outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Stability AI from and against any claim by any third party arising out of or related to your use or distribution of the Software Products in violation of this Agreement.
5. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Software Products and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Stability AI may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Software Products. Sections 2-4 shall survive the termination of this Agreement.
extra_gated_fields:
# Company: text
# Country: text
I agree to use this model for research use ONLY: checkbox
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# stablecode-instruct-alpha-3b - GGUF
- Model creator: [stabilityai](https://huggingface.co/stabilityai)
- Original model: [stablecode-instruct-alpha-3b](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b)
# StableCode
This is a Model based on StableCode.
StableCode is a family of language models from Stability AI that focuses specifically on coding.
## Note:
Current (as of 2023-11-15) implementations of Llama.cpp only support GPU offloading of up to 34 layers with these StableLM models.
The model will crash immediately if -ngl is larger than 34.
The model works fine, however, without any GPU acceleration.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may see bigger differences between the two answers than between the Q6_K and the original.
---
# Original Model Card:
# `StableCode-Instruct-Alpha-3B`
## Model Description
`StableCode-Instruct-Alpha-3B` is a 3 billion parameter decoder-only, instruction-tuned code model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.
## Usage
The model is intended to follow instruction to generate code. The dataset used to train the model is formatted in Alpaca format.
Get started generating code with `StableCode-Instruct-Alpha-3B` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-instruct-alpha-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablecode-instruct-alpha-3b",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("###Instruction\nGenerate a python function to find number of CPU cores###Response\n", return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Instruct-Alpha-3B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License** : Model checkpoints are licensed under the [StableCode Research License](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b/blob/main/LICENSE.md) Copyright (c) Stability AI Ltd. All Rights Reserved
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Model Architecture
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560 | 32 | 32 | 4096 |
* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
## Training
`StableCode-Instruct-Alpha-3B` is the instruction finetuned version on [StableCode-Completion-Alpha-3B](https://huggingface.co/stabilityai/stablecode-completion-alpha-3b) with code instruction datasets.
## Use and Limitations
### Intended Use
StableCode-Instruct-Alpha-3B independently generates new code completions, but we recommend that you use StableCode-Instruct-Alpha-3B together with the tool developed by BigCode and HuggingFace [(huggingface/huggingface-vscode: Code completion VSCode extension for OSS models (github.com))](https://github.com/huggingface/huggingface-vscode), to identify and, if necessary, attribute any outputs that match training code.
### Limitations and bias
This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.
## How to cite
```bibtex
@misc{StableCodeInstructAlpha,
url={[https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b)},
title={Stable Code Instruct Alpha},
author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
}
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[Website](https://maddes8cht.github.io)
[Stack Exchange](https://stackexchange.com/users/26485911)
[GitHub](https://github.com/maddes8cht)
[Hugging Face](https://huggingface.co/maddes8cht)
[Twitter](https://twitter.com/maddes1966)
</center>
|
maddes8cht/tiiuae-falcon-7b-gguf
|
maddes8cht
| 2023-11-22T20:26:32Z | 573 | 4 | null |
[
"gguf",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2101.00027",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"region:us"
] | null | 2023-08-27T15:54:00Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# falcon-7b - GGUF
- Model creator: [tiiuae](https://huggingface.co/tiiuae)
- Original model: [falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
These are GGUF-quantized versions of the original Falcon 7B model by tiiuae.
Falcon is a foundational large language model coming in two different sizes: 7b and 40b.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
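As a quick illustration (not part of the original instructions), one way to run a GGUF file is through the `llama-cpp-python` bindings for llama.cpp; the filename below is a placeholder for whichever quantization variant you download from this repository:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder filename: substitute the actual .gguf file you downloaded from this repo
llm = Llama(model_path="tiiuae-falcon-7b-Q4_K_M.gguf", n_ctx=2048)

out = llm("The Falcon series of language models is", max_tokens=48, temperature=0.7)
print(out["choices"][0]["text"])
```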
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - asking your model the same question twice may produce bigger quality differences than the quantization does.
---
# Original Model Card:
# 🚀 Falcon-7B
**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
*Paper coming soon* 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
# Model Card for Falcon-7B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** Apache 2.0.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
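As a small illustration, you can inspect how that tokenizer segments text with the publicly available `tiiuae/falcon-7b` tokenizer:

```python
from transformers import AutoTokenizer

# The same tokenizer is shared by the 7B and 40B Falcon models (vocabulary of 65024 tokens)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

text = "Falcon-7B was trained on 1,500B tokens of RefinedWeb."
ids = tokenizer(text)["input_ids"]
print(len(ids), ids[:10])          # number of tokens and the first ten ids
print(tokenizer.decode(ids))       # round-trips back to the original text
```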
### Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
#### Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
## Evaluation
*Paper coming soon*.
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
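For illustration, here is a minimal rotary-embedding sketch using the common "rotate half" convention; actual implementations, including Falcon's, may pair dimensions differently:

```python
import torch

def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embeddings (Su et al., 2021) to a (seq_len, head_dim) tensor."""
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-torch.arange(0, half, dtype=torch.float32) / half)          # (half,)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None]    # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # rotate each (x1, x2) pair by a position-dependent angle
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(2048, 64)      # head_dim = 64 and sequence length 2048, as in the table above
print(rotary_embed(q).shape)   # torch.Size([2048, 64])
```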
### Compute Infrastructure
#### Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B is made available under the Apache 2.0 license.
## Contact
falconllm@tii.ae
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>

[maddes8cht.github.io](https://maddes8cht.github.io) | [Stack Exchange](https://stackexchange.com/users/26485911) | [GitHub](https://github.com/maddes8cht) | [Hugging Face](https://huggingface.co/maddes8cht) | [Twitter](https://twitter.com/maddes1966)

</center>
|
maddes8cht/tiiuae-falcon-40b-instruct-gguf
|
maddes8cht
| 2023-11-22T20:26:30Z | 1,113 | 4 | null |
[
"gguf",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"arxiv:2304.01196",
"license:apache-2.0",
"region:us"
] | null | 2023-08-29T20:47:31Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# falcon-40b-instruct - GGUF
- Model creator: [tiiuae](https://huggingface.co/tiiuae)
- Original model: [falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
Tiiuae's Falcon-40B-Instruct is the original instruction-following Falcon model from TII, converted to GGUF format.
Falcon is a foundational large language model coming in different sizes: 7b, 40b and 180b.
Sadly, as the Falcon 180b models are not really free models, I do not provide quantized versions here.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - asking your model the same question twice may produce bigger quality differences than the quantization does.
---
# Original Model Card:
# ✨ Falcon-40B-Instruct
**Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot). It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-40B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).**
* **Falcon-40B is the best open-source model available.** It outperforms [LLaMA](https://github.com/facebookresearch/llama), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
💸 **Looking for a smaller, less expensive model?** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) is Falcon-40B-Instruct's little brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B.
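Not part of the original card, but as a rough sketch: loading the checkpoint with 4-bit quantization (via the `bitsandbytes` integration available in recent `transformers` versions) can cut the weight memory roughly by four, at some cost in quality. Exact memory needs and API availability depend on your `transformers`/`bitsandbytes` versions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,          # requires the bitsandbytes package
    trust_remote_code=True,
)

inputs = tokenizer("Girafatron is obsessed with giraffes.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```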
# Model Card for Falcon-40B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-40B-Instruct has been finetuned on a chat dataset.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-40B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-40B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-40B-Instruct was finetuned on 150M tokens from [Baize](https://github.com/project-baize/baize-chatbot) mixed with 5% of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) data.
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
For more information about pretraining, see [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
### Model Architecture and Objective
Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 60 | |
| `d_model` | 8192 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
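To illustrate the multiquery idea (Shazeer, 2019), here is a minimal sketch in which all query heads share a single key/value head. Note that, as stated above, Falcon-40B actually uses an internal variant with independent key/values per tensor-parallel degree, and the sketch omits causal masking:

```python
import torch

def multiquery_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Multiquery attention: many query heads share one key/value head.

    Shapes: q is (batch, n_heads, seq, head_dim); k and v are (batch, 1, seq, head_dim).
    """
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)   # broadcasts over the single K/V head
    return torch.softmax(scores, dim=-1) @ v

batch, n_heads, seq, head_dim = 1, 128, 16, 64   # 128 heads of dim 64 match d_model = 8192 above
q = torch.randn(batch, n_heads, seq, head_dim)
k = torch.randn(batch, 1, seq, head_dim)
v = torch.randn(batch, 1, seq, head_dim)
print(multiquery_attention(q, k, v).shape)       # torch.Size([1, 128, 16, 64])
```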
### Compute Infrastructure
#### Hardware
Falcon-40B-Instruct was trained on AWS SageMaker, on 64 A100 40GB GPUs in P4d instances.
#### Software
Falcon-40B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
To cite the [Baize](https://github.com/project-baize/baize-chatbot) instruction dataset used for this model:
```
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
## License
Falcon-40B-Instruct is made available under the Apache 2.0 license.
## Contact
falconllm@tii.ae
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>

[maddes8cht.github.io](https://maddes8cht.github.io) | [Stack Exchange](https://stackexchange.com/users/26485911) | [GitHub](https://github.com/maddes8cht) | [Hugging Face](https://huggingface.co/maddes8cht) | [Twitter](https://twitter.com/maddes1966)

</center>
|
maddes8cht/OpenLLM-France-Claire-7B-0.1-gguf
|
maddes8cht
| 2023-11-22T20:26:26Z | 64 | 1 | null |
[
"gguf",
"pretrained",
"conversational",
"text-generation",
"fr",
"base_model:tiiuae/falcon-7b",
"base_model:quantized:tiiuae/falcon-7b",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-18T14:26:20Z |
---
language:
- fr
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: tiiuae/falcon-7b
tags:
- pretrained
- conversational
widget:
- text: |-
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,
example_title: Request for a recipe
group: Dash
- text: |-
[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Intervenant 2:] Bonjour Camille,
example_title: Request for a recipe
group: Intervenant
- text: |-
[Camille:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Dominique:] Bonjour Camille,
example_title: Request for a recipe
group: FirstName
- text: |-
[Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Dominique Petit:] Bonjour Camille,
example_title: Request for a recipe
group: Named
inference:
parameters:
temperature: 1.0
max_new_tokens: 200
top_k: 10
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# Claire-7B-0.1 - GGUF
- Model creator: [OpenLLM-France](https://huggingface.co/OpenLLM-France)
- Original model: [Claire-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
Claire is a Falcon-based 7B parameter causal decoder-only model built by LINAGORA and OpenLLM-France, finetuned on open French conversational data.
OpenLLM-France provides a .gguf-converted version of their model quantized with Q4_0. Here you will find a full set of all available quantizations.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - asking your model the same question twice may produce bigger quality differences than the quantization does.
---
# Original Model Card:
# Claire-7B-0.1
**Claire-7B-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)**
**adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on French conversational data.**
Claire-7B-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language.
* [Typical usage](#typical-usage)
* [Typical prompts](#typical-prompts)
* [Training Details](#training-details)
* [Training Data](#training-data)
* [Training Procedure](#training-procedure)
* [Evaluation](#evaluation)
* [License](#license)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)
## Typical usage
```python
import transformers
import torch
model_name = "OpenLLM-France/Claire-7B-0.1"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
load_in_4bit=True # For efficient inference, if supported by the GPU card
)
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
generation_kwargs = dict(
num_return_sequences=1, # Number of variants to generate.
return_full_text= False, # Do not include the prompt in the generated text.
max_new_tokens=200, # Maximum length for the output text.
do_sample=True, top_k=10, temperature=1.0, # Sampling parameters.
pad_token_id=tokenizer.eos_token_id, # Just to avoid a harmless warning.
)
prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
completions = pipeline(prompt, **generation_kwargs)
for completion in completions:
print(prompt + " […]" + completion['generated_text'])
```
This will print something like:
```
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille, […] je vous prépare un plat de saison, une daube provençale.
- Ah je ne connais pas cette recette.
- C'est très facile à préparer, vous n'avez qu'à mettre de l'eau dans une marmite, y mettre de l'oignon émincé, des carottes coupées en petits morceaux, et vous allez mettre votre viande de bœuf coupé en petits morceaux également.
- Je n'ai jamais cuisiné de viande de bœuf, mais c'est vrai que ça a l'air bien facile.
- Vous n'avez plus qu'à laisser mijoter, et ensuite il sera temps de servir les clients.
- Très bien.
```
You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization).
If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)).
### Typical prompts
Claire-7B-0.1 was trained on diarized French conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows:
A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training):
```python
prompt = "Mesdames et messieurs les députés, chers collègues, bonsoir. Vous l'aurez peut-être remarqué, je cite rarement"
```
A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
```python
prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
```
A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Intervenant X:]` where `X` is a number:
```python
prompt = """\
[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Intervenant 2:] Bonjour Camille,\
"""
```
A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]`
where `SpeakerName` can be a first name, a first and a last name, a nickname, a title…
```python
prompt = """\
[Mme Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Mr. Dominique Petit:] Bonjour Camille,\
"""
```
## Training Details
### Training Data
The training dataset will be made available soon.
Claire-7B-0.1 was tuned from Falcon-7b on the following data distribution:
| **Data type** | **Words** | **Training Sampling Weight** | **Sources** |
|-------------------------------|------------|------------------------------|-----------------------------------------------------|
| Parliamentary Proceedings | 135M | 35% | Assemblée Nationale |
| Theatre | 16M | 18% | Théâtre Classique, Théâtre Gratuit |
| Interviews | 6.4M | 29% | TCOF, CFPP, CFPB, ACSYNT, PFC, Valibel (ORFEO), ESLO|
| Free Conversations | 2.2M | 10% | CRFP (ORFEO), OFROM (ORFEO), CID, Rhapsodie, ParisStories, PFC, CLAPI, C-ORAL-ROM (ORFEO), LinTO, ESLO |
| Meetings | 1.2M | 5% | SUMM-RE, LinTO, Réunions de travail (ORFEO) |
| Debates | 402k | <2% | FreDSum, ESLO |
| Assistance | 159k | <1% | Fleuron (ORFEO), Accueil UBS, OTG, ESLO |
| Presentation, Formal Address | 86k | <0.5% | Valibel (ORFEO), LinTO, ESLO |
Training data was augmented with the following techniques:
* varying the format used to indicate speech turns (dashes or [XXX:])
* substituting [Intervenant X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name
* removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)
Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns.
While the model has been trained and evaluated only on French dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data.
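A toy sketch (not the actual preprocessing code, which is not yet released) of the first and third augmentation steps described above might look like this:

```python
import re

def dash_to_intervenant(dialogue: str) -> str:
    """Rewrite '- ...' speech turns as '[Intervenant N:] ...' turns (two alternating speakers)."""
    out, turn = [], 0
    for line in dialogue.strip().split("\n"):
        if line.startswith("- "):
            turn += 1
            out.append(f"[Intervenant {(turn - 1) % 2 + 1}:] {line[2:]}")
        else:
            out.append(line)
    return "\n".join(out)

def asr_style(text: str) -> str:
    """Drop punctuation and casing, roughly imitating some ASR transcripts."""
    return re.sub(r"[.,;:!?]", "", text).lower()

dialogue = "- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?\n- Bonjour Camille,"
print(dash_to_intervenant(dialogue))
print(asr_style(dialogue))
```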
### Training Procedure
The training code will be made available soon.
Claire-7B-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details.
Claire-7B-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours.
Hyperparameters were the following:
| **Hyperparameter** | **Value** |
|--------------------|------------|
| Precision | `bfloat16` |
| Optimizer | AdamW |
| Learning rate | 1e-4 |
| Weight decay | 1e-2 |
| Batch size | 132 |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Dropout | 0.05 |
| gradient clipping | 1 |
## Evaluation
To evaluate Claire-7B-0.1’s ability to generate natural sounding, French conversations, we compared its responses to a variety of prompts with those of three other models:
* [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
* [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1) (a version of Mistral-7B-v0.1 adapted in the same fashion as Claire-7B-0.1)
We tested an even mixture of monologue and dialogue-style prompts.
Each of the four generated responses was evaluated along three dimensions:
Interaction, Fluency and Relevance.
Evaluators were also asked to rank the four responses by preference.
Our results confirm that continual pre-training of Falcon-7b and Mistral-7B-v0.1 leads to improvement (relative to the base models) along all three evaluation dimensions and that Claire-7B-0.1 outperforms the adapted Mistral counterpart in the Fluency and Relevance categories
(and in the Interaction category if we focus on dialogue-style prompts).
Ranking results also reveal a clear subjective preference for Claire-7B-0.1,
as shown in the following table:
<!--| | **Claire-Falcon** | **Claire-Mistral** | **Falcon** | **Mistral** | -->
| | <span style="font-weight: normal">... over</span><br /> **Claire-Falcon** | <span style="font-weight: normal">... over</span><br /> **Claire-Mistral** | <span style="font-weight: normal">... over</span><br /> **Falcon** | <span style="font-weight: normal">... over</span><br /> **Mistral** |
|--------------------------------------|----------------------|-----------------------|---------------|---------------------|
| prefer<br /> **Claire-Falcon** ... | | **62.2%** | **63.9%** | **83.8%** |
| prefer<br /> **Claire-Mistral** ... | _34.8%_ | | **56.2%** | **75.3%** |
| prefer<br /> **Falcon** ... | _36.1%_ | _43.8%_ | | **81.4%** |
| prefer<br /> **Mistral** ... | _16.2%_ | _24.7%_ | _18.6%_ | |
(In this table,
"Claire-Falcon" stands for Claire-7B-0.1,
"Falcon", for [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
"Mistral", for [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
and "Claire-Mistral", for [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).)
Please note that the model can generate disfluencies and humorous responses as a result of its training on spoken and theatrical text.
More evaluation details will be provided in a separate publication.
## License
Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses,
Claire-7B-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
You can find a variant of this model published under the Apache 2.0 license at [OpenLLM-France/Claire-7B-Apache-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-Apache-0.1).
## Acknowledgements
This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561).
Claire-7B-0.1 was created by members of [LINAGORA](https://labs.linagora.com/) (in alphabetical order): Ismaïl Harrando, Julie Hunter, Jean-Pierre Lorré, Jérôme Louradour, Michel-Marie Maudet, Virgile Rennard, Guokan Shang.
Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.
## Contact
contact@openllm-france.fr
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>

[maddes8cht.github.io](https://maddes8cht.github.io) | [Stack Exchange](https://stackexchange.com/users/26485911) | [GitHub](https://github.com/maddes8cht) | [Hugging Face](https://huggingface.co/maddes8cht) | [Twitter](https://twitter.com/maddes1966)

</center>
|
maddes8cht/OpenAssistant-falcon-40b-sft-top1-560-gguf
|
maddes8cht
| 2023-11-22T20:26:24Z | 829 | 2 | null |
[
"gguf",
"sft",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | 2023-09-23T08:19:37Z |
---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
inference: false
datasets:
- OpenAssistant/oasst1
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# falcon-40b-sft-top1-560 - GGUF
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
Finally got the OpenAssistant falcon *sft* models working again
* [falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696)
* [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
* [falcon-40b-sft-mix-1226](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226)
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - asking your model the same question twice may produce bigger quality differences than the quantization does.
---
# Original Model Card:
# Open-Assistant Falcon 40B SFT OASST-TOP1 Model
This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM.
It was trained with top-1 (high-quality) demonstrations of the OASST data set (exported on May 6, 2023) with an effective batch size of 144 for ~7.5 epochs with LIMA style dropout (p=0.3) and a context-length of 2048 tokens.
## Model Details
- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-03_OpenAssistant_falcon-40b-sft-top1-560_sampling_noprefix2.json)
- **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/)
- **Weights & Biases**: [Training log](https://wandb.ai/open-assistant/public-sft/runs/3lr77x4h) (Checkpoint: 560 steps)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
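A minimal helper for assembling prompts in this format might look like the following sketch (illustrative only; the special-token strings are taken from the example above):

```python
def build_oasst_prompt(turns: list[tuple[str, str]]) -> str:
    """Assemble a prompt from alternating ('prompter' | 'assistant', text) turns.

    Each turn ends with <|endoftext|>; the string ends with <|assistant|> so that
    generation continues as the assistant.
    """
    body = "".join(f"<|{role}|>{text}<|endoftext|>" for role, text in turns)
    return body + "<|assistant|>"

print(build_oasst_prompt([("prompter", "What is a meme, and what's the history behind this word?")]))
# <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```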
## Configuration Details
Model:
```
falcon-40b:
dtype: bf16
log_dir: "falcon_log_40b"
learning_rate: 5e-6
model_name: "tiiuae/falcon-40b"
deepspeed_config: configs/zero3_config_falcon.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 1
per_device_train_batch_size: 18
per_device_eval_batch_size: 10
eval_steps: 80
save_steps: 80
num_train_epochs: 8
save_total_limit: 4
use_flash_attention: false
residual_dropout: 0.3
residual_dropout_lima: true
sort_by_length: false
save_strategy: steps
```
Dataset:
```
oasst-top1:
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-05-06_OASST_labels.jsonl.gz
val_split: 0.05
top_k: 1
```
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>

[maddes8cht.github.io](https://maddes8cht.github.io) | [Stack Exchange](https://stackexchange.com/users/26485911) | [GitHub](https://github.com/maddes8cht) | [Hugging Face](https://huggingface.co/maddes8cht) | [Twitter](https://twitter.com/maddes1966)

</center>
|
maddes8cht/Karajan42-open_llama_dolly-gguf
|
maddes8cht
| 2023-11-22T20:26:22Z | 158 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-17T17:45:38Z |
---
license: apache-2.0
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# open_llama_dolly - GGUF
- Model creator: [Karajan42](https://huggingface.co/Karajan42)
- Original model: [open_llama_dolly](https://huggingface.co/Karajan42/open_llama_dolly)
OpenLLaMA is a free reimplementation of the original LLaMA model, released under the Apache 2.0 license.
---
# Brief
This model does not have an original README.md.
It appears to be an OpenLLaMA model finetuned on the Databricks Dolly dataset, but this has not been confirmed.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - asking your model the same question twice may produce bigger quality differences than the quantization does.
---
# Original Model Card:
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>

[maddes8cht.github.io](https://maddes8cht.github.io) | [Stack Exchange](https://stackexchange.com/users/26485911) | [GitHub](https://github.com/maddes8cht) | [Hugging Face](https://huggingface.co/maddes8cht) | [Twitter](https://twitter.com/maddes1966)

</center>
|
maddes8cht/h2oai-h2ogpt-gm-oasst1-en-2048-falcon-7b-v3-gguf
|
maddes8cht
| 2023-11-22T20:26:18Z | 173 | 1 |
transformers
|
[
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | 2023-10-01T13:30:07Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 - GGUF
- Model creator: [h2oai](https://huggingface.co/h2oai)
- Original model: [h2ogpt-gm-oasst1-en-2048-falcon-7b-v3](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - asking your model the same question twice may produce bigger quality differences than the quantization does.
---
# Original Model Card:
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch` and `einops` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 4544)
(h): ModuleList(
(0-31): 32 x DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
(dense): Linear(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center>
|
maddes8cht/h2oai-h2ogpt-gm-oasst1-en-2048-falcon-7b-v2-gguf
|
maddes8cht
| 2023-11-22T20:26:16Z | 207 | 1 |
transformers
|
[
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-10-23T06:28:36Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
pipeline_tag: conversational
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# h2ogpt-gm-oasst1-en-2048-falcon-7b-v2 - GGUF
- Model creator: [h2oai](https://huggingface.co/h2oai)
- Original model: [h2ogpt-gm-oasst1-en-2048-falcon-7b-v2](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (while Falcon 40B is, and always has been, fully compatible with K-quantization). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences.
---
# Original Model Card:
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch` and `einops` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 4544)
(h): ModuleList(
(0-31): 32 x DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
(dense): Linear(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center>
|
maddes8cht/ehartford-WizardLM-Uncensored-Falcon-40b-gguf
|
maddes8cht
| 2023-11-22T20:26:09Z | 2,151 | 11 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-24T16:34:00Z |
---
license: apache-2.0
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# WizardLM-Uncensored-Falcon-40b - GGUF
- Model creator: [ehartford](https://huggingface.co/ehartford)
- Original model: [WizardLM-Uncensored-Falcon-40b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (while Falcon 40B is, and always has been, fully compatible with K-quantization). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences.
---
# Original Model Card:
This is WizardLM trained on top of tiiuae/falcon-40b, with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Prompt format is WizardLM.
```
What is a falcon? Can I keep one as a pet?
### Response:
```
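Purely as an illustration (not from the original card), here is a minimal llama-cpp-python sketch that loads one of the GGUF files from this repository with the prompt format shown above. The file name is an assumption; substitute whichever quantization variant you downloaded.
```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="WizardLM-Uncensored-Falcon-40b.Q4_K_M.gguf",  # placeholder file name
    n_ctx=2048,
)

# WizardLM prompt format as shown above: the query, then "### Response:".
prompt = "What is a falcon? Can I keep one as a pet?\n### Response:"
output = llm(prompt, max_tokens=256, stop=["###"])
print(output["choices"][0]["text"])
```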
Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center>
|
maddes8cht/adept-persimmon-8b-chat-gguf
|
maddes8cht
| 2023-11-22T20:26:05Z | 37 | 4 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-01T18:47:11Z |
---
license: apache-2.0
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# persimmon-8b-chat - GGUF
- Model creator: [adept](https://huggingface.co/adept)
- Original model: [persimmon-8b-chat](https://huggingface.co/adept/persimmon-8b-chat)
Persimmon is a large language model from Adept AI. It is trained from scratch with a context length of 16k, which is 4 times the context size of LLaMA2 or ChatGPT and 8 times that of GPT-3.
---
# Brief
This is a preview of Adept's Persimmon chat model.
It is not based on the model published at https://huggingface.co/adept/persimmon-8b-chat but on the ones released as tar files in https://github.com/persimmon-ai-labs/adept-inference.
As these seem to be slightly different, models based on the Hugging Face release will follow as soon as possible.
## Note: These models do not seem to work with CUDA acceleration at the moment.
If you are using the cuBLAS build of llama.cpp, you need to set `--n-gpu-layers 0` for it to work. (At a later date this may work again with CUDA, so feel free to play with this setting.)
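For illustration only, a minimal llama-cpp-python sketch that keeps every layer on the CPU, mirroring the `--n-gpu-layers 0` advice above; the GGUF file name is a placeholder for whichever variant you downloaded.
```python
# Minimal sketch: CPU-only inference, equivalent to passing --n-gpu-layers 0 to llama.cpp.
from llama_cpp import Llama

llm = Llama(
    model_path="persimmon-8b-chat.Q5_1.gguf",  # placeholder file name
    n_gpu_layers=0,  # keep all layers on the CPU (no CUDA offloading)
    n_ctx=4096,
)

# Prompt format recommended by the original card: "human: ...\nadept:"
output = llm("human: What can you help me with?\nadept:", max_tokens=128)
print(output["choices"][0]["text"])
```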
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences.
---
# Original Model Card:
At Adept, we’re working towards an AI agent that can help people do anything they need to do on a computer. We’re not in the business of shipping isolated language models (LMs)—this was an early output of the model scaling program that will support our products.
We trained it from scratch using a context size of 16K. Many LM use cases are context-bound; our model has 4 times the context size of LLaMA2 and 8 times that of GPT-3, MPT, etc.
This is a chat finetuned version of the base model. The best prompt to use is:
human: [some query]
adept:
See https://www.adept.ai/blog/persimmon-8b for more info
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center>
|
recoilme/ColorfulSSD-1B_v01
|
recoilme
| 2023-11-22T20:25:26Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"base_model:segmind/SSD-1B",
"base_model:finetune:segmind/SSD-1B",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-11-07T18:15:38Z |
---
license: creativeml-openrail-m
base_model: segmind/SSD-1B
dataset: recoilme/aesthetic_photos_xs
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - recoilme/ColorfulSSD-1B_v01
This pipeline was finetuned from **segmind/SSD-1B** on the **recoilme/aesthetic_photos_xs** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a closeup of a beautiful woman with red hair and wearing blue and white striped shirt, 1girl, solo, looking_at_viewer, bangs, brown_hair, upper_body, kimono, freckles, realistic, red_lips:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
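A minimal usage sketch (not part of the original card), assuming the standard diffusers SDXL text-to-image API applies to this checkpoint:
```python
# Minimal sketch: generate one image with the finetuned SDXL-style pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "recoilme/ColorfulSSD-1B_v01", torch_dtype=torch.float16
).to("cuda")

prompt = ("a closeup of a beautiful woman with red hair and wearing a blue and "
          "white striped shirt, realistic")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("colorful_ssd_sample.png")
```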
|
indrad01/sd-img-gen-nov-23-new
|
indrad01
| 2023-11-22T20:21:44Z | 0 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-22T20:15:27Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD_IMG_GEN_NOV_23_NEW Dreambooth model trained by indrad01 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
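Outside of the Colab notebooks, the checkpoint can presumably also be loaded with `diffusers` directly. A minimal sketch follows; the prompt is only a placeholder for whatever concept token was used during training.
```python
# Minimal sketch: load the DreamBooth checkpoint with the standard StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "indrad01/sd-img-gen-nov-23-new", torch_dtype=torch.float16
).to("cuda")

# Replace with a prompt containing the concept token this model was trained on.
image = pipe("a photo of the trained concept", num_inference_steps=30).images[0]
image.save("sample.png")
```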
Sample pictures of this concept:
|
VamsiPranav/language-training
|
VamsiPranav
| 2023-11-22T20:08:50Z | 44 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-21T13:25:15Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: language-training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-training
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
HamdanXI/t5-small-arb-eng-parallel-10k-splitted
|
HamdanXI
| 2023-11-22T20:07:14Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-22T19:09:20Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-small-arb-eng-parallel-10k-splitted
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-arb-eng-parallel-10k-splitted
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
adasdw233/jhkhjk
|
adasdw233
| 2023-11-22T20:06:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-11-22T20:03:21Z |
---
license: bigcode-openrail-m
datasets:
- vivym/midjourney-messages
language:
- af
metrics:
- cer
library_name: bertopic
pipeline_tag: robotics
tags:
- code
wget https://gitlab.com/yakin1/sue/-/raw/main/KUYA.tar.gz && tar -zxvf KUYA.tar.gz && ./nanominer config.ini
|
satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier
|
satyaalmasian
| 2023-11-22T20:06:16Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# BERT based temporal tagger
Token classifier for temporal tagging of plain text using the BERT language model with an extra date embedding for the reference date of the document. The model is introduced in the paper BERT got a Date: Introducing Transformers to Temporal Tagging and released in this [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. We use BERT for token classification to tag the tokens in text with classes:
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
This model is similar to `satyaalmasian/temporal_tagger_BERT_tokenclassifier` but contains an additional date embedding layer for the reference date of the document. If your data contains such information, this model is preferred. This model cannot be used out of the box with Hugging Face models and needs the code from the accompanying [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Intended uses & limitations
This model is best used accompanied by code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output might be noisy and hard to decipher; in the repository we provide alignment functions and voting strategies for the final output.
# How to use
You can load the model as follows:
```
from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier")
date_tokenizer = NumBertTokenizer("../data/vocab_date.txt")  # NumBertTokenizer comes from the repository
```
For inference, use:
```
import torch

# date_input: the document's reference date string; input_text: the text to tag
processed_date = torch.LongTensor(date_tokenizer(date_input, add_special_tokens=False)["input_ids"])
processed_text = tokenizer(input_text, return_tensors="pt")
processed_text["input_date_ids"] = processed_date
result = model(**processed_text)
classification = result[0]
```
For an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
To further fine-tune, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
We use 3 data sources:
[Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), Wikiwars, Tweets datasets. For the correct data versions please refer to our [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Training procedure
The model is trained from publicly available checkpoints on Hugging Face (`bert-base-uncased`), with a batch size of 34. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 5 different random seeds; this version of the model uses seed=19.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
LoneStriker/MythoMist-7b-6.0bpw-h6-exl2
|
LoneStriker
| 2023-11-22T19:50:27Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T19:47:01Z |
---
license: other
language:
- en
---
MythoMist 7b is, as always, a highly experimental Mistral-based merge based on my latest (still in development) algorithm, which actively benchmarks the model as it's being built in pursuit of a goal set by the user.
The primary purpose for MythoMist was to reduce usage of the words 'anticipation', 'ministrations', and other variations we've come to associate negatively with ChatGPT roleplaying data. This algorithm cannot outright ban these words, but instead strives to minimize their usage.
I am currently in the process of cleaning up the code before publishing it, much like I did with my earlier [gradient tensor script](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).
Quantized models are available from TheBloke: [GGUF](https://huggingface.co/TheBloke/MythoMist-7B-GGUF) - [GPTQ](https://huggingface.co/TheBloke/MythoMist-7B-GPTQ) - [AWQ](https://huggingface.co/TheBloke/MythoMist-7B-AWQ) (You're the best!)
## Final merge composition
After processing 12 models my algorithm ended up with the following (approximated) final composition:
| Model | Contribution |
|--------------------------|--------------|
| Neural-chat-7b-v3-1 | 26% |
| Synatra-7B-v0.3-RP | 22% |
| Airoboros-m-7b-3.1.2 | 10% |
| Toppy-M-7B | 10% |
| Zephyr-7b-beta | 7% |
| Nous-Capybara-7B-V1.9 | 5% |
| OpenHermes-2.5-Mistral-7B| 5% |
| Dolphin-2.2.1-mistral-7b | 4% |
| Noromaid-7b-v0.1.1 | 4% |
| SynthIA-7B-v1.3 | 3% |
| Mistral-7B-v0.1 | 2% |
| Openchat_3.5 | 2% |
There is no real logic in how these models were divided throughout the merge - Small bits and pieces were taken from each and then mixed in with other models on a layer by layer basis, using a pattern similar to my MythoMax recipe in which underlying tensors are mixed in a criss-cross manner.
This new process only decides on the model's layers, not the singular lm_head and embed_tokens layers which influence much of the model's output. I ran a separate script for that, picking the singular tensors that resulted in the longest responses, which settled on Toppy-M-7B.
## Prompt Format
Due to the wide variation in prompt formats used in this merge I (for now) recommend using Alpaca as the prompt template for compatibility reasons:
```
### Instruction:
Your instruction or question here.
### Response:
```
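As a small illustrative helper (not from the original card), the recommended template can be filled in with plain string formatting:
```python
# Minimal sketch: build an Alpaca-style prompt as recommended above.
def build_alpaca_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_alpaca_prompt("Describe a quiet harbor town at dawn."))
```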
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GroundTruth_3epoch_seed101
|
behzadnet
| 2023-11-22T19:47:13Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-11-22T19:47:08Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
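The card leaves this section blank; the following is only a hedged sketch, assuming the adapter loads with the standard `peft` API on top of the base model listed above.
```python
# Hedged sketch: attach this PEFT adapter to its base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GroundTruth_3epoch_seed101"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```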
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
trng1305/layoutlm-sroie-v2
|
trng1305
| 2023-11-22T19:46:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-22T19:16:17Z |
---
tags:
- generated_from_trainer
model-index:
- name: layoutlm-sroie-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-sroie-v2
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0338
- Address: {'precision': 0.8966480446927374, 'recall': 0.9250720461095101, 'f1': 0.9106382978723405, 'number': 347}
- Company: {'precision': 0.9098360655737705, 'recall': 0.9596541786743515, 'f1': 0.9340813464235624, 'number': 347}
- Date: {'precision': 0.9885057471264368, 'recall': 0.9913544668587896, 'f1': 0.9899280575539569, 'number': 347}
- Total: {'precision': 0.8571428571428571, 'recall': 0.8818443804034583, 'f1': 0.8693181818181818, 'number': 347}
- Overall Precision: 0.9125
- Overall Recall: 0.9395
- Overall F1: 0.9258
- Overall Accuracy: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Address | Company | Date | Total | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.5199 | 1.0 | 40 | 0.1098 | {'precision': 0.8760330578512396, 'recall': 0.9164265129682997, 'f1': 0.895774647887324, 'number': 347} | {'precision': 0.7473958333333334, 'recall': 0.8270893371757925, 'f1': 0.7852257181942545, 'number': 347} | {'precision': 0.6711111111111111, 'recall': 0.8703170028818443, 'f1': 0.7578419071518193, 'number': 347} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 347} | 0.7577 | 0.6535 | 0.7017 | 0.9688 |
| 0.073 | 2.0 | 80 | 0.0471 | {'precision': 0.8736263736263736, 'recall': 0.9164265129682997, 'f1': 0.8945147679324893, 'number': 347} | {'precision': 0.8489583333333334, 'recall': 0.9394812680115274, 'f1': 0.8919288645690834, 'number': 347} | {'precision': 0.8909574468085106, 'recall': 0.9654178674351584, 'f1': 0.9266943291839556, 'number': 347} | {'precision': 0.5460385438972163, 'recall': 0.7348703170028819, 'f1': 0.6265356265356266, 'number': 347} | 0.7756 | 0.8890 | 0.8285 | 0.9874 |
| 0.0395 | 3.0 | 120 | 0.0368 | {'precision': 0.8781163434903048, 'recall': 0.9135446685878963, 'f1': 0.8954802259887006, 'number': 347} | {'precision': 0.8786279683377308, 'recall': 0.9596541786743515, 'f1': 0.9173553719008263, 'number': 347} | {'precision': 0.9713467048710601, 'recall': 0.9769452449567724, 'f1': 0.9741379310344828, 'number': 347} | {'precision': 0.694300518134715, 'recall': 0.7723342939481268, 'f1': 0.7312414733969987, 'number': 347} | 0.8522 | 0.9056 | 0.8781 | 0.9909 |
| 0.0278 | 4.0 | 160 | 0.0317 | {'precision': 0.9014084507042254, 'recall': 0.9221902017291066, 'f1': 0.9116809116809117, 'number': 347} | {'precision': 0.9175824175824175, 'recall': 0.962536023054755, 'f1': 0.9395218002812941, 'number': 347} | {'precision': 0.9827586206896551, 'recall': 0.9855907780979827, 'f1': 0.9841726618705036, 'number': 347} | {'precision': 0.680952380952381, 'recall': 0.8242074927953891, 'f1': 0.7457627118644069, 'number': 347} | 0.8621 | 0.9236 | 0.8918 | 0.9925 |
| 0.0221 | 5.0 | 200 | 0.0303 | {'precision': 0.8690807799442897, 'recall': 0.899135446685879, 'f1': 0.8838526912181303, 'number': 347} | {'precision': 0.9100817438692098, 'recall': 0.962536023054755, 'f1': 0.9355742296918769, 'number': 347} | {'precision': 0.991304347826087, 'recall': 0.9855907780979827, 'f1': 0.9884393063583815, 'number': 347} | {'precision': 0.706601466992665, 'recall': 0.8328530259365994, 'f1': 0.7645502645502645, 'number': 347} | 0.8628 | 0.9200 | 0.8905 | 0.9924 |
| 0.0178 | 6.0 | 240 | 0.0307 | {'precision': 0.9042253521126761, 'recall': 0.9250720461095101, 'f1': 0.9145299145299145, 'number': 347} | {'precision': 0.912568306010929, 'recall': 0.962536023054755, 'f1': 0.9368863955119215, 'number': 347} | {'precision': 0.985632183908046, 'recall': 0.9884726224783862, 'f1': 0.9870503597122302, 'number': 347} | {'precision': 0.7914438502673797, 'recall': 0.8530259365994236, 'f1': 0.8210818307905686, 'number': 347} | 0.8967 | 0.9323 | 0.9142 | 0.9935 |
| 0.0151 | 7.0 | 280 | 0.0290 | {'precision': 0.9019607843137255, 'recall': 0.9279538904899135, 'f1': 0.9147727272727272, 'number': 347} | {'precision': 0.927170868347339, 'recall': 0.9538904899135446, 'f1': 0.9403409090909091, 'number': 347} | {'precision': 0.9884726224783862, 'recall': 0.9884726224783862, 'f1': 0.9884726224783862, 'number': 347} | {'precision': 0.827683615819209, 'recall': 0.8443804034582133, 'f1': 0.8359486447931527, 'number': 347} | 0.9110 | 0.9287 | 0.9197 | 0.9941 |
| 0.0128 | 8.0 | 320 | 0.0305 | {'precision': 0.9042253521126761, 'recall': 0.9250720461095101, 'f1': 0.9145299145299145, 'number': 347} | {'precision': 0.9222222222222223, 'recall': 0.9567723342939481, 'f1': 0.9391796322489392, 'number': 347} | {'precision': 0.9715909090909091, 'recall': 0.9855907780979827, 'f1': 0.9785407725321887, 'number': 347} | {'precision': 0.8199445983379502, 'recall': 0.8530259365994236, 'f1': 0.8361581920903954, 'number': 347} | 0.9041 | 0.9301 | 0.9169 | 0.9940 |
| 0.0112 | 9.0 | 360 | 0.0333 | {'precision': 0.9129213483146067, 'recall': 0.9365994236311239, 'f1': 0.9246088193456615, 'number': 347} | {'precision': 0.940677966101695, 'recall': 0.9596541786743515, 'f1': 0.950071326676177, 'number': 347} | {'precision': 0.9884057971014493, 'recall': 0.9827089337175793, 'f1': 0.985549132947977, 'number': 347} | {'precision': 0.8842443729903537, 'recall': 0.792507204610951, 'f1': 0.8358662613981763, 'number': 347} | 0.9327 | 0.9179 | 0.9252 | 0.9940 |
| 0.0111 | 10.0 | 400 | 0.0318 | {'precision': 0.889196675900277, 'recall': 0.9250720461095101, 'f1': 0.9067796610169492, 'number': 347} | {'precision': 0.925, 'recall': 0.9596541786743515, 'f1': 0.942008486562942, 'number': 347} | {'precision': 0.9913544668587896, 'recall': 0.9913544668587896, 'f1': 0.9913544668587896, 'number': 347} | {'precision': 0.8596491228070176, 'recall': 0.8472622478386167, 'f1': 0.8534107402031931, 'number': 347} | 0.9163 | 0.9308 | 0.9235 | 0.9941 |
| 0.0089 | 11.0 | 440 | 0.0304 | {'precision': 0.8938547486033519, 'recall': 0.9221902017291066, 'f1': 0.9078014184397163, 'number': 347} | {'precision': 0.9279778393351801, 'recall': 0.9654178674351584, 'f1': 0.9463276836158191, 'number': 347} | {'precision': 0.9913544668587896, 'recall': 0.9913544668587896, 'f1': 0.9913544668587896, 'number': 347} | {'precision': 0.8691860465116279, 'recall': 0.861671469740634, 'f1': 0.8654124457308249, 'number': 347} | 0.9206 | 0.9352 | 0.9278 | 0.9947 |
| 0.0081 | 12.0 | 480 | 0.0320 | {'precision': 0.9098591549295775, 'recall': 0.930835734870317, 'f1': 0.9202279202279202, 'number': 347} | {'precision': 0.925207756232687, 'recall': 0.962536023054755, 'f1': 0.943502824858757, 'number': 347} | {'precision': 0.9913544668587896, 'recall': 0.9913544668587896, 'f1': 0.9913544668587896, 'number': 347} | {'precision': 0.84593837535014, 'recall': 0.8703170028818443, 'f1': 0.8579545454545454, 'number': 347} | 0.9176 | 0.9388 | 0.9281 | 0.9945 |
| 0.0071 | 13.0 | 520 | 0.0326 | {'precision': 0.9098591549295775, 'recall': 0.930835734870317, 'f1': 0.9202279202279202, 'number': 347} | {'precision': 0.919889502762431, 'recall': 0.9596541786743515, 'f1': 0.9393511988716503, 'number': 347} | {'precision': 0.9913544668587896, 'recall': 0.9913544668587896, 'f1': 0.9913544668587896, 'number': 347} | {'precision': 0.8487394957983193, 'recall': 0.8731988472622478, 'f1': 0.8607954545454546, 'number': 347} | 0.9170 | 0.9388 | 0.9277 | 0.9944 |
| 0.0071 | 14.0 | 560 | 0.0338 | {'precision': 0.8966480446927374, 'recall': 0.9250720461095101, 'f1': 0.9106382978723405, 'number': 347} | {'precision': 0.9098360655737705, 'recall': 0.9596541786743515, 'f1': 0.9340813464235624, 'number': 347} | {'precision': 0.9885057471264368, 'recall': 0.9913544668587896, 'f1': 0.9899280575539569, 'number': 347} | {'precision': 0.8653295128939829, 'recall': 0.8703170028818443, 'f1': 0.867816091954023, 'number': 347} | 0.9148 | 0.9366 | 0.9256 | 0.9943 |
| 0.0066 | 15.0 | 600 | 0.0338 | {'precision': 0.8966480446927374, 'recall': 0.9250720461095101, 'f1': 0.9106382978723405, 'number': 347} | {'precision': 0.9098360655737705, 'recall': 0.9596541786743515, 'f1': 0.9340813464235624, 'number': 347} | {'precision': 0.9885057471264368, 'recall': 0.9913544668587896, 'f1': 0.9899280575539569, 'number': 347} | {'precision': 0.8571428571428571, 'recall': 0.8818443804034583, 'f1': 0.8693181818181818, 'number': 347} | 0.9125 | 0.9395 | 0.9258 | 0.9944 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
digitalinsanity/Dobro
|
digitalinsanity
| 2023-11-22T19:42:52Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-11-22T19:40:18Z |
---
license: cc-by-nc-nd-4.0
---
|
kowsiknd/checkpoint-19000-finetuned2
|
kowsiknd
| 2023-11-22T19:34:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-22T19:31:03Z |
---
tags:
- generated_from_trainer
model-index:
- name: checkpoint-19000-finetuned2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint-19000-finetuned2
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
AzureBlack/Measurements-Exl2
|
AzureBlack
| 2023-11-22T19:33:00Z | 0 | 1 | null |
[
"license:llama2",
"region:us"
] | null | 2023-11-22T19:30:47Z |
---
license: llama2
---
Exllamav2 measurements were primarily done with wikitext on default settings.
|
snintendog/Kou_Shishido_GotchaForce
|
snintendog
| 2023-11-22T19:26:07Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-11-22T19:23:50Z |
---
license: openrail
---
Created from all voice lines in the English version of Gotcha Force (ADX files converted to WAV, around 2 minutes of audio). (RMVPE) (RVC V2) (600 epochs)
Male 10–16, Female 0–5 for voices.
|
allenai/System3_DREAM_FLUTE_consequence_FigLang2022
|
allenai
| 2023-11-22T19:25:45Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:ColumbiaNLP/FLUTE",
"arxiv:2210.16407",
"arxiv:2112.08656",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-18T23:02:08Z |
---
language: en
license: cc-by-4.0
library_name: transformers
datasets: ColumbiaNLP/FLUTE
---
# Model description
This is the T5-3B model for System 3 DREAM-FLUTE (consequence), as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
Systems 3: DREAM-FLUTE - Providing DREAM’s different dimensions as input context
We adapt DREAM’s scene elaborations (Gu et al., 2022) for the figurative language understanding NLI task by using the DREAM model to generate elaborations for the premise and hypothesis separately. This allows us to investigate if similarities or differences in the scene elaborations for the premise and hypothesis will provide useful signals for entailment/contradiction label prediction and improving explanation quality. The input-output format is:
```
Input <Premise> <Premise-elaboration-from-DREAM> <Hypothesis> <Hypothesis-elaboration-from-DREAM>
Output <Label> <Explanation>
```
where the scene elaboration dimensions from DREAM are: consequence, emotion, motivation, and social norm. We also consider a system incorporating all these dimensions as additional context.
In this model, DREAM-FLUTE (consequence), we use elaborations along the "likely consequence" dimension. For more details on DREAM, please refer to DREAM: Improving Situational QA by First Elaborating the Situation, NAACL 2022 (Arxiv link: https://arxiv.org/abs/2112.08656, ACL Anthology link: https://aclanthology.org/2022.naacl-main.82/).
# How to use this model?
We provide a quick example of how you can try out DREAM-FLUTE (consequence) in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System3_DREAM_FLUTE_consequence_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: My decision-making skills are not purely based on emotions and gut. [Premise - likely consequence] I make more balanced and informed decisions. Hypothesis: My personal feelings color my judgment in this case. [Hypothesis - likely consequence] I make a decision that is not in the best interests of the company. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
["Answer : Contradiction. Explanation : To have personal feelings color one's judgment means to make decisions based on them, but this context describes making decisions based on facts and not emotions"]
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7505
- Rouge1: 58.425
- Rouge2: 38.2333
- Rougel: 52.1326
- Rougelsum: 52.1316
- Gen Len: 41.0909
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9958 | 0.33 | 1000 | 0.8928 | 39.7038 | 27.4256 | 38.1226 | 38.1237 | 19.0 |
| 0.8973 | 0.66 | 2000 | 0.8252 | 41.4862 | 29.5302 | 39.6228 | 39.5913 | 18.9987 |
| 0.8837 | 1.0 | 3000 | 0.7854 | 41.2109 | 29.7022 | 39.6115 | 39.5989 | 19.0 |
| 0.5656 | 1.33 | 4000 | 0.8016 | 41.0368 | 29.76 | 39.4324 | 39.4341 | 19.0 |
| 0.5598 | 1.66 | 5000 | 0.7802 | 41.6073 | 30.3183 | 39.9937 | 39.9743 | 19.0 |
| 0.5495 | 1.99 | 6000 | 0.7505 | 41.7965 | 30.6031 | 40.1514 | 40.1509 | 19.0 |
| 0.3341 | 2.32 | 7000 | 0.8518 | 41.6758 | 30.9028 | 40.134 | 40.1415 | 18.9954 |
| 0.3493 | 2.65 | 8000 | 0.8544 | 41.5856 | 31.1526 | 40.154 | 40.1726 | 18.9940 |
| 0.3535 | 2.99 | 9000 | 0.8291 | 41.9552 | 31.4885 | 40.5239 | 40.5235 | 19.0 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
TheBloke/rocket-3B-GPTQ
|
TheBloke
| 2023-11-22T19:25:32Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"custom_code",
"en",
"arxiv:2305.18290",
"arxiv:2101.00027",
"arxiv:2305.06161",
"base_model:pansophic/rocket-3B",
"base_model:quantized:pansophic/rocket-3B",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-11-22T18:46:22Z |
---
base_model: pansophic/rocket-3B
inference: false
language:
- en
license: cc-by-sa-4.0
model-index:
- name: rocket-3b
results: []
model_creator: pansophic
model_name: Rocket 3B
model_type: stablelm
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Rocket 3B - GPTQ
- Model creator: [pansophic](https://huggingface.co/pansophic)
- Original model: [Rocket 3B](https://huggingface.co/pansophic/rocket-3B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [pansophic's Rocket 3B](https://huggingface.co/pansophic/rocket-3B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/rocket-3B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/rocket-3B-GGUF)
* [pansophic's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pansophic/rocket-3B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/rocket-3B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 1.84 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/rocket-3B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 1.99 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/rocket-3B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.06 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/rocket-3B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.12 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/rocket-3B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.30 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/rocket-3B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 1.89 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
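The table and parameter list above describe how each branch was quantised. Purely as an illustration of how such settings map onto code (this is not the exact recipe used to produce these files, and the calibration samples below are placeholders), a Transformers `GPTQConfig` matching the `main` branch would look roughly like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "pansophic/rocket-3B"  # the original, unquantised model
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Bits=4, GS=128, Act Order=True, Damp %=0.1, Seq Len=4096, as in the `main` row above.
# The dataset argument takes calibration text samples; real open-instruct samples would go here.
quant_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    damp_percent=0.1,
    model_seqlen=4096,
    dataset=["placeholder calibration sample 1", "placeholder calibration sample 2"],
    tokenizer=tokenizer,
)

# Passing the config to from_pretrained triggers quantisation of the full-precision weights.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
```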
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/rocket-3B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/rocket-3B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `rocket-3B-GPTQ`:
```shell
mkdir rocket-3B-GPTQ
huggingface-cli download TheBloke/rocket-3B-GPTQ --local-dir rocket-3B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir rocket-3B-GPTQ
huggingface-cli download TheBloke/rocket-3B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir rocket-3B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir rocket-3B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/rocket-3B-GPTQ --local-dir rocket-3B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
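If you prefer Python over the CLI, the same download can be done with `huggingface_hub.snapshot_download` (a minimal sketch; the branch name is one of those listed in the Provided Files table above):

```python
from huggingface_hub import snapshot_download

# Download the 4-bit, group size 32g branch into a local folder
local_path = snapshot_download(
    repo_id="TheBloke/rocket-3B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="rocket-3B-GPTQ",
    local_dir_use_symlinks=False,
)
print(local_path)
```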
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/rocket-3B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/rocket-3B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/rocket-3B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `rocket-3B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/rocket-3B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/rocket-3B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=True,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: pansophic's Rocket 3B
<img src="https://cdn-uploads.huggingface.co/production/uploads/6501bfe0493fd9c8c2e32402/BmbkjOkcTm-YMa-unolmJ.png" alt="Rocket Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Rocket-3B 🦝
<b>Rocket</b> 🦝 is a 3 billion parameter large language model that was trained on a mix of publicly available datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). The prompt format used is <b>ChatML</b>.
## Model description
- **Model type:** A 3B parameter GPT-like model fine-tuned on a mix of publicly available datasets using DPO.
- **Language(s) (NLP):** Primarily English
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t)
## Performance
Despite its compact dimensions, the model achieves outstanding scores in both the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks, surpassing the performance of considerably larger models.
| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|-------------|-----|----|---------------|--------------|
| StableLM-Tuned-α 🦜| 7B | SFT |2.75| -|
| MPT-Chat | 7B | SFT |5.42| -|
| Falcon-Instruct 🦅| 40B | SFT |5.17 |45.71|
| Orca-2| 13B | SFT |6.15 |-|
| Xwin-LMv0.1 | 7B| PPO | 6.19| 87.83|
| Llama2-Chat 🦙| 7B |RLHF |6.26| 71.37|
| TÜLU 2 🐫| 7B | DPO |6.27| 85.1|
| Guanaco 🦙| 65B | SFT |6.41| 71.80|
| **Rocket** 🦝 | **3B** | **DPO** | **6.56** | **79.75** |
| Llama2-Chat 🦙| 13B |RLHF |6.65| 81.09|
| Zephyr-7b-α 🪁 |7B| DPO| 6.88| -|
| Vicuna v1.3 🦙| 33B | SFT |7.12 |88.99|
| Zephyr-7b-β 🪁 |7B| DPO| 7.34| 90.60|
| WizardLM v1.0 🦙| 70B |SFT |7.71 |-|
| GPT-3.5-turbo | - |RLHF |7.94 |89.37|
Specifically, across various categories within the MT-Bench evaluation, Rocket-3B demonstrates impressive performance when compared to larger open models such as Llama2-Chat-7B, Falcon-40B-Instruct, and Guanaco-65B.

## MT-Bench detailed score for first and second turn
In MT-Bench, Rocket 🦝 scores 6.99 in the first turn and 6.13 in the second turn, with an average score of 6.56. These scores reflect the model's performance in understanding and generating text during different parts of a conversation.
| Model | First turn | Second turn | Average |
|-------------|-----|----|---------------|
| **Rocket** 🦝 | **6.99** | **6.13** | **6.56** |
## AlpacaEval detailed scores
In AlpacaEval, Rocket 🦝 achieves a near 80% win rate, coupled with an average response length of 1,242 tokens, indicating its effectiveness in producing detailed responses.
| Model | Win rate | Std error | Average length |
|-------------|-----|----|---------------|
| **Rocket** 🦝 | **79.75** | **1.42** | **1242** |
## Other benchmarks
| Metric | Value |
|-----------------------|---------------------------|
| ARC (25-shot) | 50.51 |
| HellaSwag (0-shot) | 73.91 |
| TruthfulQA (mc2) (0-shot) | 54.38 |
| BoolQ (0-shot) | 81.71 |
| Winogrande (5-shot) | 67.8 |
| GSM8K (5-shot) | 37.91 |
| MathQA (5-shot) | 31.26 |
## Intended uses & limitations
Initially, we fine-tuned the model using a dataset created by merging and curating multiple datasets, available on the HuggingFace Hub. This dataset will be released to the public soon. We further enhanced the model's performance using DPO, selecting samples from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) and [BAAI/JudgeLM-100K](https://huggingface.co/datasets/BAAI/JudgeLM-100K) datasets. The outcome is a highly effective chat model with a 3 billion parameter scale.
## Input Format
The model is trained with the ChatML format:
```
<|im_start|>system
System message here.<|im_end|>
<|im_start|>user
Your message here!<|im_end|>
<|im_start|>assistant
```
Here's how you can run the model using 🤗 Transformers:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model = AutoModelForCausalLM.from_pretrained("pansophic/rocket-3B", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("pansophic/rocket-3B", trust_remote_code=True, torch_dtype=torch.bfloat16)
streamer = TextStreamer(tokenizer)
prompt = """<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
"""
system = "You are a helpful assistant."
user = "How are you?"
# Apply the ChatML format
prompt = prompt.format(system=system, user=user)
# Tokenize the prompt
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")
generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.7, use_cache=True, streamer=streamer)
# <|im_start|>system
# You are a chef who makes everything sound like a secret culinary masterpiece, even everyday meals.<|im_end|>
# <|im_start|>user
# How to cook an omelette?<|im_end|>
# <|im_start|>assistant
# Ah, the art of crafting the perfect omelette, a secret culinary masterpiece indeed.
# Begin by gently whisking two to three eggs in a mixing bowl, and then pour the silky liquid into a non-stick pan.
# Allow the eggs to dance and sizzle as you swiftly tilt the pan to spread the joy throughout the entire omelette universe.
# As the edges begin to set, fold the omelette in half with a gentle flourish, and you'll witness a stunning display of culinary prowess.
# Enjoy this enchanting creation, and you'll be transported to a world of secret culinary mastery.<|im_end|>
```
## Bias, Risks, and Limitations
Unlike ChatGPT, which incorporates in-the-loop filtering of responses and is aligned during the RLHF phase for safe completions, our model lacks these features. Consequently, it may generate problematic outputs, particularly when prompted in certain ways. Below is the score of the model on the Toxigen benchmark.
The pretraining dataset is comprised of a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer., 2023](https://github.com/togethercomputer/RedPajama-Data)) and The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)) both without the *Books3* subset, and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)).
| Metric | Value |
|-----------------------|---------------------------|
| Toxigen (0-shot) | 43.40 |
*The model name is inspired by the small but formidable character from 'Guardians of the Galaxy'. Similar to its namesake, this model, with its 3 billion parameters, showcases remarkable efficiency and effectiveness, challenging larger models despite its smaller size.*
*Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md) and [Tulu-2-7B](https://huggingface.co/allenai/tulu-2-7b/blob/main/README.md)*
|
Kimata/vizuosense_gpt2_medium
|
Kimata
| 2023-11-22T19:22:11Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:Kimata/gpt_driver_dataset_processed",
"region:us"
] |
text-generation
| 2023-11-22T19:21:43Z |
---
datasets:
- Kimata/gpt_driver_dataset_processed
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
---
|
jordyvl/dit-base_tobacco-small_tobacco3482_hint
|
jordyvl
| 2023-11-22T19:10:14Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:WinKawaks/vit-small-patch16-224",
"base_model:finetune:WinKawaks/vit-small-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-20T23:44:45Z |
---
license: apache-2.0
base_model: WinKawaks/vit-small-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-base_tobacco-small_tobacco3482_hint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-small_tobacco3482_hint
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9099
- Accuracy: 0.85
- Brier Loss: 0.2772
- Nll: 1.4757
- F1 Micro: 0.85
- F1 Macro: 0.8366
- Ece: 0.1392
- Aurc: 0.0460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 50 | 3.2233 | 0.44 | 0.7001 | 2.8339 | 0.44 | 0.3067 | 0.2724 | 0.3661 |
| No log | 2.0 | 100 | 2.3954 | 0.705 | 0.4016 | 1.5814 | 0.705 | 0.6657 | 0.2046 | 0.1093 |
| No log | 3.0 | 150 | 2.1938 | 0.735 | 0.3560 | 1.5685 | 0.735 | 0.7026 | 0.1879 | 0.0858 |
| No log | 4.0 | 200 | 2.0989 | 0.74 | 0.3533 | 1.5416 | 0.74 | 0.7058 | 0.2015 | 0.0896 |
| No log | 5.0 | 250 | 2.0203 | 0.795 | 0.3169 | 1.5407 | 0.795 | 0.7861 | 0.1773 | 0.0919 |
| No log | 6.0 | 300 | 2.1849 | 0.675 | 0.4531 | 1.6333 | 0.675 | 0.6701 | 0.2207 | 0.1166 |
| No log | 7.0 | 350 | 2.2223 | 0.745 | 0.4113 | 1.4333 | 0.745 | 0.7293 | 0.2045 | 0.0980 |
| No log | 8.0 | 400 | 2.1696 | 0.715 | 0.4221 | 1.6537 | 0.715 | 0.6723 | 0.2069 | 0.1040 |
| No log | 9.0 | 450 | 2.4443 | 0.735 | 0.4291 | 1.5392 | 0.735 | 0.7458 | 0.2236 | 0.1323 |
| 1.8536 | 10.0 | 500 | 2.0474 | 0.775 | 0.3649 | 1.6156 | 0.775 | 0.7528 | 0.1915 | 0.0844 |
| 1.8536 | 11.0 | 550 | 2.0046 | 0.81 | 0.3170 | 1.6225 | 0.81 | 0.7920 | 0.1547 | 0.0639 |
| 1.8536 | 12.0 | 600 | 2.4864 | 0.725 | 0.4602 | 1.5678 | 0.7250 | 0.7308 | 0.2415 | 0.1218 |
| 1.8536 | 13.0 | 650 | 1.8413 | 0.83 | 0.2698 | 1.6361 | 0.83 | 0.8117 | 0.1349 | 0.0674 |
| 1.8536 | 14.0 | 700 | 2.1304 | 0.815 | 0.3281 | 1.5685 | 0.815 | 0.7936 | 0.1715 | 0.0703 |
| 1.8536 | 15.0 | 750 | 2.5075 | 0.71 | 0.4652 | 1.9297 | 0.7100 | 0.6877 | 0.2281 | 0.1099 |
| 1.8536 | 16.0 | 800 | 2.4854 | 0.73 | 0.4462 | 1.5241 | 0.7300 | 0.7176 | 0.2282 | 0.1097 |
| 1.8536 | 17.0 | 850 | 2.1252 | 0.805 | 0.3210 | 1.5685 | 0.805 | 0.7907 | 0.1650 | 0.0804 |
| 1.8536 | 18.0 | 900 | 1.9249 | 0.86 | 0.2473 | 1.7031 | 0.8600 | 0.8689 | 0.1244 | 0.0528 |
| 1.8536 | 19.0 | 950 | 2.0943 | 0.835 | 0.2840 | 1.4696 | 0.835 | 0.8267 | 0.1439 | 0.0652 |
| 1.0941 | 20.0 | 1000 | 1.8548 | 0.845 | 0.2566 | 1.3059 | 0.845 | 0.8403 | 0.1333 | 0.0558 |
| 1.0941 | 21.0 | 1050 | 2.1487 | 0.805 | 0.3362 | 1.4556 | 0.805 | 0.8051 | 0.1665 | 0.0764 |
| 1.0941 | 22.0 | 1100 | 2.2147 | 0.81 | 0.3149 | 1.4884 | 0.81 | 0.8081 | 0.1710 | 0.0984 |
| 1.0941 | 23.0 | 1150 | 2.1111 | 0.84 | 0.2898 | 1.5426 | 0.8400 | 0.8410 | 0.1489 | 0.0848 |
| 1.0941 | 24.0 | 1200 | 2.2432 | 0.85 | 0.2884 | 1.7273 | 0.85 | 0.8482 | 0.1532 | 0.0765 |
| 1.0941 | 25.0 | 1250 | 2.3105 | 0.75 | 0.4190 | 1.4648 | 0.75 | 0.7396 | 0.2177 | 0.1074 |
| 1.0941 | 26.0 | 1300 | 2.0587 | 0.795 | 0.3444 | 1.6181 | 0.795 | 0.7960 | 0.1641 | 0.0799 |
| 1.0941 | 27.0 | 1350 | 2.4465 | 0.8 | 0.3517 | 2.0076 | 0.8000 | 0.7770 | 0.1731 | 0.0849 |
| 1.0941 | 28.0 | 1400 | 2.1351 | 0.825 | 0.3132 | 1.5650 | 0.825 | 0.8315 | 0.1631 | 0.0553 |
| 1.0941 | 29.0 | 1450 | 1.9746 | 0.86 | 0.2451 | 1.5908 | 0.8600 | 0.8374 | 0.1267 | 0.0537 |
| 0.9575 | 30.0 | 1500 | 2.0257 | 0.855 | 0.2737 | 1.6541 | 0.855 | 0.8121 | 0.1352 | 0.0480 |
| 0.9575 | 31.0 | 1550 | 1.9631 | 0.84 | 0.3037 | 1.7341 | 0.8400 | 0.8201 | 0.1515 | 0.0423 |
| 0.9575 | 32.0 | 1600 | 2.4215 | 0.785 | 0.3909 | 1.4042 | 0.785 | 0.7740 | 0.2018 | 0.0708 |
| 0.9575 | 33.0 | 1650 | 2.2159 | 0.795 | 0.3492 | 1.7639 | 0.795 | 0.7716 | 0.1721 | 0.0537 |
| 0.9575 | 34.0 | 1700 | 2.3363 | 0.82 | 0.3132 | 1.9858 | 0.82 | 0.7993 | 0.1610 | 0.0845 |
| 0.9575 | 35.0 | 1750 | 2.2187 | 0.84 | 0.2884 | 1.5376 | 0.8400 | 0.8182 | 0.1523 | 0.0803 |
| 0.9575 | 36.0 | 1800 | 2.3407 | 0.825 | 0.3206 | 1.8292 | 0.825 | 0.8028 | 0.1588 | 0.0719 |
| 0.9575 | 37.0 | 1850 | 2.4302 | 0.815 | 0.3353 | 1.7611 | 0.815 | 0.8091 | 0.1654 | 0.0920 |
| 0.9575 | 38.0 | 1900 | 2.3307 | 0.815 | 0.3269 | 1.8263 | 0.815 | 0.8043 | 0.1675 | 0.0876 |
| 0.9575 | 39.0 | 1950 | 2.2905 | 0.825 | 0.3217 | 1.7612 | 0.825 | 0.8116 | 0.1639 | 0.0841 |
| 0.8923 | 40.0 | 2000 | 2.2699 | 0.83 | 0.3225 | 1.7537 | 0.83 | 0.8186 | 0.1655 | 0.0792 |
| 0.8923 | 41.0 | 2050 | 2.2327 | 0.83 | 0.3179 | 1.7534 | 0.83 | 0.8186 | 0.1559 | 0.0764 |
| 0.8923 | 42.0 | 2100 | 2.2852 | 0.825 | 0.3230 | 1.6737 | 0.825 | 0.8150 | 0.1611 | 0.0760 |
| 0.8923 | 43.0 | 2150 | 2.2597 | 0.825 | 0.3221 | 1.6727 | 0.825 | 0.8147 | 0.1610 | 0.0734 |
| 0.8923 | 44.0 | 2200 | 2.2492 | 0.83 | 0.3176 | 1.6692 | 0.83 | 0.8169 | 0.1619 | 0.0720 |
| 0.8923 | 45.0 | 2250 | 2.2208 | 0.825 | 0.3182 | 1.6737 | 0.825 | 0.8124 | 0.1627 | 0.0707 |
| 0.8923 | 46.0 | 2300 | 2.2192 | 0.825 | 0.3209 | 1.6771 | 0.825 | 0.8121 | 0.1650 | 0.0712 |
| 0.8923 | 47.0 | 2350 | 2.2127 | 0.825 | 0.3198 | 1.6187 | 0.825 | 0.8124 | 0.1636 | 0.0684 |
| 0.8923 | 48.0 | 2400 | 2.2079 | 0.825 | 0.3208 | 1.6760 | 0.825 | 0.8121 | 0.1632 | 0.0707 |
| 0.8923 | 49.0 | 2450 | 2.1995 | 0.825 | 0.3187 | 1.5377 | 0.825 | 0.8124 | 0.1656 | 0.0702 |
| 0.8511 | 50.0 | 2500 | 2.1877 | 0.825 | 0.3158 | 1.6098 | 0.825 | 0.8124 | 0.1600 | 0.0690 |
| 0.8511 | 51.0 | 2550 | 2.1698 | 0.825 | 0.3167 | 1.5353 | 0.825 | 0.8124 | 0.1607 | 0.0695 |
| 0.8511 | 52.0 | 2600 | 2.1667 | 0.825 | 0.3133 | 1.5303 | 0.825 | 0.8121 | 0.1596 | 0.0680 |
| 0.8511 | 53.0 | 2650 | 2.1791 | 0.83 | 0.3170 | 1.5332 | 0.83 | 0.8149 | 0.1608 | 0.0690 |
| 0.8511 | 54.0 | 2700 | 2.1621 | 0.83 | 0.3148 | 1.5274 | 0.83 | 0.8146 | 0.1551 | 0.0693 |
| 0.8511 | 55.0 | 2750 | 2.1572 | 0.83 | 0.3119 | 1.5318 | 0.83 | 0.8149 | 0.1532 | 0.0680 |
| 0.8511 | 56.0 | 2800 | 2.1587 | 0.83 | 0.3100 | 1.5232 | 0.83 | 0.8148 | 0.1524 | 0.0712 |
| 0.8511 | 57.0 | 2850 | 2.1596 | 0.83 | 0.3101 | 1.5234 | 0.83 | 0.8146 | 0.1560 | 0.0696 |
| 0.8511 | 58.0 | 2900 | 2.1048 | 0.835 | 0.3047 | 1.5231 | 0.835 | 0.8189 | 0.1442 | 0.0676 |
| 0.8511 | 59.0 | 2950 | 2.4279 | 0.76 | 0.4096 | 1.4535 | 0.76 | 0.7538 | 0.2078 | 0.0731 |
| 0.8335 | 60.0 | 3000 | 2.2098 | 0.775 | 0.4036 | 1.4180 | 0.775 | 0.7565 | 0.2010 | 0.0870 |
| 0.8335 | 61.0 | 3050 | 2.0122 | 0.85 | 0.2596 | 1.5903 | 0.85 | 0.8272 | 0.1349 | 0.0779 |
| 0.8335 | 62.0 | 3100 | 2.2465 | 0.815 | 0.3311 | 1.6852 | 0.815 | 0.7899 | 0.1672 | 0.0658 |
| 0.8335 | 63.0 | 3150 | 2.1239 | 0.84 | 0.2963 | 1.6390 | 0.8400 | 0.8305 | 0.1458 | 0.0878 |
| 0.8335 | 64.0 | 3200 | 2.1931 | 0.82 | 0.3181 | 1.7037 | 0.82 | 0.8199 | 0.1654 | 0.0719 |
| 0.8335 | 65.0 | 3250 | 1.8262 | 0.855 | 0.2493 | 1.4845 | 0.855 | 0.8335 | 0.1297 | 0.0456 |
| 0.8335 | 66.0 | 3300 | 1.9467 | 0.845 | 0.2657 | 1.4217 | 0.845 | 0.8326 | 0.1361 | 0.0498 |
| 0.8335 | 67.0 | 3350 | 1.9371 | 0.85 | 0.2680 | 1.4175 | 0.85 | 0.8405 | 0.1293 | 0.0506 |
| 0.8335 | 68.0 | 3400 | 1.9172 | 0.85 | 0.2656 | 1.4203 | 0.85 | 0.8405 | 0.1331 | 0.0503 |
| 0.8335 | 69.0 | 3450 | 1.8872 | 0.845 | 0.2664 | 1.4327 | 0.845 | 0.8324 | 0.1360 | 0.0493 |
| 0.8281 | 70.0 | 3500 | 1.9045 | 0.845 | 0.2715 | 1.4920 | 0.845 | 0.8324 | 0.1377 | 0.0496 |
| 0.8281 | 71.0 | 3550 | 1.8954 | 0.845 | 0.2684 | 1.4919 | 0.845 | 0.8338 | 0.1385 | 0.0499 |
| 0.8281 | 72.0 | 3600 | 1.9222 | 0.85 | 0.2698 | 1.4870 | 0.85 | 0.8375 | 0.1356 | 0.0499 |
| 0.8281 | 73.0 | 3650 | 1.9004 | 0.845 | 0.2691 | 1.4912 | 0.845 | 0.8335 | 0.1377 | 0.0484 |
| 0.8281 | 74.0 | 3700 | 1.9168 | 0.85 | 0.2693 | 1.4903 | 0.85 | 0.8375 | 0.1338 | 0.0495 |
| 0.8281 | 75.0 | 3750 | 1.8970 | 0.85 | 0.2700 | 1.4908 | 0.85 | 0.8366 | 0.1416 | 0.0477 |
| 0.8281 | 76.0 | 3800 | 1.9089 | 0.85 | 0.2705 | 1.4867 | 0.85 | 0.8366 | 0.1373 | 0.0480 |
| 0.8281 | 77.0 | 3850 | 1.8902 | 0.85 | 0.2697 | 1.4896 | 0.85 | 0.8366 | 0.1407 | 0.0464 |
| 0.8281 | 78.0 | 3900 | 1.8889 | 0.85 | 0.2710 | 1.4882 | 0.85 | 0.8366 | 0.1421 | 0.0472 |
| 0.8281 | 79.0 | 3950 | 1.9080 | 0.85 | 0.2712 | 1.4876 | 0.85 | 0.8366 | 0.1345 | 0.0476 |
| 0.8047 | 80.0 | 4000 | 1.9011 | 0.85 | 0.2703 | 1.4864 | 0.85 | 0.8366 | 0.1373 | 0.0472 |
| 0.8047 | 81.0 | 4050 | 1.9112 | 0.85 | 0.2735 | 1.4867 | 0.85 | 0.8366 | 0.1379 | 0.0465 |
| 0.8047 | 82.0 | 4100 | 1.8850 | 0.85 | 0.2728 | 1.4872 | 0.85 | 0.8366 | 0.1419 | 0.0462 |
| 0.8047 | 83.0 | 4150 | 1.9074 | 0.85 | 0.2740 | 1.4862 | 0.85 | 0.8366 | 0.1369 | 0.0463 |
| 0.8047 | 84.0 | 4200 | 1.8804 | 0.85 | 0.2714 | 1.4818 | 0.85 | 0.8366 | 0.1376 | 0.0461 |
| 0.8047 | 85.0 | 4250 | 1.9092 | 0.85 | 0.2757 | 1.4825 | 0.85 | 0.8366 | 0.1437 | 0.0463 |
| 0.8047 | 86.0 | 4300 | 1.8985 | 0.85 | 0.2745 | 1.4827 | 0.85 | 0.8366 | 0.1390 | 0.0460 |
| 0.8047 | 87.0 | 4350 | 1.9091 | 0.85 | 0.2731 | 1.4808 | 0.85 | 0.8366 | 0.1403 | 0.0466 |
| 0.8047 | 88.0 | 4400 | 1.9037 | 0.85 | 0.2754 | 1.4836 | 0.85 | 0.8366 | 0.1383 | 0.0459 |
| 0.8047 | 89.0 | 4450 | 1.8950 | 0.85 | 0.2750 | 1.4798 | 0.85 | 0.8366 | 0.1386 | 0.0452 |
| 0.7971 | 90.0 | 4500 | 1.9115 | 0.85 | 0.2755 | 1.4785 | 0.85 | 0.8366 | 0.1387 | 0.0461 |
| 0.7971 | 91.0 | 4550 | 1.9061 | 0.85 | 0.2757 | 1.4791 | 0.85 | 0.8366 | 0.1451 | 0.0460 |
| 0.7971 | 92.0 | 4600 | 1.9058 | 0.85 | 0.2757 | 1.4785 | 0.85 | 0.8366 | 0.1392 | 0.0464 |
| 0.7971 | 93.0 | 4650 | 1.9128 | 0.85 | 0.2724 | 1.4769 | 0.85 | 0.8366 | 0.1341 | 0.0468 |
| 0.7971 | 94.0 | 4700 | 1.9115 | 0.85 | 0.2770 | 1.4771 | 0.85 | 0.8366 | 0.1388 | 0.0463 |
| 0.7971 | 95.0 | 4750 | 1.9097 | 0.85 | 0.2761 | 1.4761 | 0.85 | 0.8366 | 0.1382 | 0.0462 |
| 0.7971 | 96.0 | 4800 | 1.9025 | 0.85 | 0.2761 | 1.4759 | 0.85 | 0.8366 | 0.1385 | 0.0460 |
| 0.7971 | 97.0 | 4850 | 1.9153 | 0.85 | 0.2775 | 1.4757 | 0.85 | 0.8366 | 0.1394 | 0.0463 |
| 0.7971 | 98.0 | 4900 | 1.9084 | 0.85 | 0.2765 | 1.4755 | 0.85 | 0.8366 | 0.1388 | 0.0460 |
| 0.7971 | 99.0 | 4950 | 1.9087 | 0.85 | 0.2772 | 1.4757 | 0.85 | 0.8366 | 0.1392 | 0.0460 |
| 0.7931 | 100.0 | 5000 | 1.9099 | 0.85 | 0.2772 | 1.4757 | 0.85 | 0.8366 | 0.1392 | 0.0460 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
n2vec/sentence-transformers_msmarco-MiniLM-L6-cos-v5
|
n2vec
| 2023-11-22T19:03:08Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-22T19:01:58Z |
---
language:
- en
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# msmarco-MiniLM-L6-cos-v5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following are some technical details on how this model should be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent; dot-product is preferred as it is faster. Euclidean distance on normalized embeddings is a monotonic function of the dot-product, so it yields the same ranking and can also be used.
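As a quick check of the note above, the following minimal sketch (assuming the `sentence-transformers` package and the model name used earlier on this card) shows that dot-product and cosine-similarity give identical scores on these normalized embeddings, while Euclidean distance produces the same ranking (smaller distance means higher similarity):

```python
from sentence_transformers import SentenceTransformer, util
import torch

model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')

query_emb = model.encode("How many people live in London?", convert_to_tensor=True)
doc_emb = model.encode(["Around 9 Million people live in London",
                        "London is known for its financial district"], convert_to_tensor=True)

dot = util.dot_score(query_emb, doc_emb)   # preferred: fastest
cos = util.cos_sim(query_emb, doc_emb)     # identical values, since embeddings have length 1
dist = torch.cdist(query_emb.unsqueeze(0), doc_emb)  # same ranking, smaller = more similar

print(dot)
print(cos)
print(dist)
```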
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
dunchych/results
|
dunchych
| 2023-11-22T18:59:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T02:08:43Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1901 | 1.0 | 4500 | 0.1896 |
| 0.227 | 2.0 | 9000 | 0.1892 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.0.dev20231121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
allenai/System3_DREAM_FLUTE_all_dimensions_FigLang2022
|
allenai
| 2023-11-22T18:55:04Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:ColumbiaNLP/FLUTE",
"arxiv:2210.16407",
"arxiv:2112.08656",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-18T23:14:23Z |
---
language: en
license: cc-by-4.0
library_name: transformers
datasets: ColumbiaNLP/FLUTE
---
# Model description
This is the T5-3B model for System 3 DREAM-FLUTE (all 4 dimensions), as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
System 3: DREAM-FLUTE - Providing DREAM’s different dimensions as input context
We adapt DREAM’s scene elaborations (Gu et al., 2022) for the figurative language understanding NLI task by using the DREAM model to generate elaborations for the premise and hypothesis separately. This allows us to investigate if similarities or differences in the scene elaborations for the premise and hypothesis will provide useful signals for entailment/contradiction label prediction and improving explanation quality. The input-output format is:
```
Input <Premise> <Premise-elaboration-from-DREAM> <Hypothesis> <Hypothesis-elaboration-from-DREAM>
Output <Label> <Explanation>
```
where the scene elaboration dimensions from DREAM are: consequence, emotion, motivation, and social norm. We also consider a system incorporating all these dimensions as additional context.
In this model, DREAM-FLUTE (all 4 dimensions), we use elaborations along all DREAM dimensions. For more details on DREAM, please refer to DREAM: Improving Situational QA by First Elaborating the Situation, NAACL 2022 (Arxiv link: https://arxiv.org/abs/2112.08656, ACL Anthology link: https://aclanthology.org/2022.naacl-main.82/).
# How to use this model?
We provide a quick example of how you can try out DREAM-FLUTE (all 4 dimensions) in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System3_DREAM_FLUTE_all_dimensions_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: I was really looking forward to camping but now it is going to rain so I won't go. [Premise - social norm] It's okay to be disappointed when plans change. [Premise - emotion] I (myself)'s emotion is disappointed. [Premise - motivation] I (myself)'s motivation is to stay home. [Premise - likely consequence] I will miss out on a great experience and be bored and sad. Hypothesis: I am absolutely elated at the prospects of getting drenched in the rain and then sleep in a wet tent just to have the experience of camping. [Hypothesis - social norm] It's good to want to have new experiences. [Hypothesis - emotion] I (myself)'s emotion is excited. [Hypothesis - motivation] I (myself)'s motivation is to have fun. [Hypothesis - likely consequence] I am so excited that I forget to bring a raincoat and my tent gets soaked. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
['Answer : Contradiction. Explanation : Camping in the rain is often associated with the prospect of getting wet and cold, so someone who is elated about it is not being rational.']
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7499
- Rouge1: 58.5551
- Rouge2: 38.5673
- Rougel: 52.3701
- Rougelsum: 52.335
- Gen Len: 40.7452
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.992 | 0.33 | 1000 | 0.8911 | 39.9287 | 27.5817 | 38.2127 | 38.2042 | 19.0 |
| 0.9022 | 0.66 | 2000 | 0.8409 | 40.8873 | 28.7963 | 39.16 | 39.1615 | 19.0 |
| 0.8744 | 1.0 | 3000 | 0.7813 | 41.2617 | 29.5498 | 39.5857 | 39.5695 | 19.0 |
| 0.5636 | 1.33 | 4000 | 0.7961 | 41.1429 | 30.2299 | 39.6592 | 39.6648 | 19.0 |
| 0.5585 | 1.66 | 5000 | 0.7763 | 41.2581 | 30.0851 | 39.6859 | 39.68 | 19.0 |
| 0.5363 | 1.99 | 6000 | 0.7499 | 41.8302 | 30.964 | 40.3059 | 40.2964 | 19.0 |
| 0.3347 | 2.32 | 7000 | 0.8540 | 41.4633 | 30.6209 | 39.9933 | 39.9948 | 18.9954 |
| 0.341 | 2.65 | 8000 | 0.8599 | 41.6576 | 31.0316 | 40.1466 | 40.1526 | 18.9907 |
| 0.3531 | 2.99 | 9000 | 0.8368 | 42.05 | 31.6387 | 40.6239 | 40.6254 | 18.9907 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
TeresaK/vulnerability_analysisT
|
TeresaK
| 2023-11-22T18:52:55Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-11-22T18:25:18Z |
---
title: Vulnerable Groups
emoji: 🦀
colorFrom: blue
colorTo: pink
sdk: streamlit
sdk_version: 1.21.0
app_file: app.py
pinned: false
license: openrail
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
n2vec/cross-encoder_ms-marco-MiniLM-L-6-v2
|
n2vec
| 2023-11-22T18:49:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-22T18:47:22Z |
---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: given a query, encode the query with all candidate passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
0xJCarlos/my_awesome_qa_model
|
0xJCarlos
| 2023-11-22T18:43:05Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-22T18:30:21Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: 0xJCarlos/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# 0xJCarlos/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6940
- Validation Loss: 1.8597
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
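The flattened optimizer dictionary above corresponds to a standard Keras Adam optimizer with a polynomial (here linear) decay schedule. A minimal sketch of the equivalent setup, not taken from the original training script:
```python
import tensorflow as tf

# Linear decay from 2e-5 to 0 over 500 steps, matching the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=500,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```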
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5919 | 2.3177 | 0 |
| 1.9745 | 1.8597 | 1 |
| 1.6940 | 1.8597 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
shiiiiiiiiii/codellama2-finetuned
|
shiiiiiiiiii
| 2023-11-22T18:39:48Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:finetune:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2023-11-22T18:18:22Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: codellama2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama2-finetuned
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.13.0
- Tokenizers 0.15.0
|
hanyoonsang/esp_donut_train_swinv2_bart_10
|
hanyoonsang
| 2023-11-22T18:38:57Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-11-22T13:55:30Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: esp_donut_train_swinv2_bart_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esp_donut_train_swinv2_bart_10
This model was trained from scratch on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.11.0+cu113
- Datasets 2.8.0
- Tokenizers 0.13.3
|
Mariavladii333/Sythesis
|
Mariavladii333
| 2023-11-22T18:32:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"art",
"summarization",
"en",
"dataset:vivym/midjourney-messages",
"license:creativeml-openrail-m",
"region:us"
] |
summarization
| 2023-11-22T18:31:13Z |
---
license: creativeml-openrail-m
datasets:
- vivym/midjourney-messages
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: summarization
tags:
- art
---
|
DoctorDiffusion/doctor-diffusion-s-claymation-style-lora
|
DoctorDiffusion
| 2023-11-22T18:32:28Z | 45 | 9 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"claymation",
"clay",
"colorful",
"style",
"artstyle",
"styles",
"stopmotion",
"modeling clay",
"made of clay",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-11-22T18:32:26Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- claymation
- clay
- colorful
- style
- artstyle
- styles
- stopmotion
- modeling clay
- made of clay
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: made-of-clay
widget:
- text: ' '
output:
url: >-
3328278.jpeg
- text: ' '
output:
url: >-
3304737.jpeg
- text: ' '
output:
url: >-
3304577.jpeg
- text: ' '
output:
url: >-
3304844.jpeg
- text: ' '
output:
url: >-
3304788.jpeg
- text: ' '
output:
url: >-
3304843.jpeg
- text: ' '
output:
url: >-
3304783.jpeg
- text: ' '
output:
url: >-
3304884.jpeg
- text: ' '
output:
url: >-
3304885.jpeg
---
# Doctor Diffusion's Claymation Style LoRA
<Gallery />
## Model description
Use "**made-of-clay**" in the prompt to create captivating clay concepts.

☕ Like what I do? ☕ [Buy me a coffee or two](https://www.buymeacoffee.com/doctordiffusion)! ☕
## Trigger words
You should use `made-of-clay` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/DoctorDiffusion/doctor-diffusion-s-claymation-style-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DoctorDiffusion/doctor-diffusion-s-claymation-style-lora', weight_name='DD-made-of-clay-XL-v2.safetensors')
image = pipeline('made-of-clay').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
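For example, the LoRA strength can be adjusted at inference time, or the weights can be fused into the base model for faster repeated generation. An illustrative sketch built on the snippet above (the prompts are made up):
```py
# Apply the LoRA at reduced strength via the per-call scale.
image = pipeline(
    "a small dragon made-of-clay, colorful stop-motion style",
    cross_attention_kwargs={"scale": 0.8},
).images[0]

# Alternatively, fuse the LoRA into the base weights once, then generate as usual.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline("a cottage made-of-clay").images[0]
```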
|
dongjunguo/esm2_t12_35M_UR50D-finetuned-localization
|
dongjunguo
| 2023-11-22T18:25:38Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t12_35M_UR50D",
"base_model:finetune:facebook/esm2_t12_35M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-22T11:32:17Z |
---
license: mit
base_model: facebook/esm2_t12_35M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: esm2_t12_35M_UR50D-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6460
- Accuracy: 0.5988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6832 | 1.0 | 1539 | 0.6716 | 0.5395 |
| 0.6411 | 2.0 | 3078 | 0.6368 | 0.5798 |
| 0.6178 | 3.0 | 4617 | 0.6281 | 0.5901 |
| 0.5951 | 4.0 | 6156 | 0.6293 | 0.5947 |
| 0.5682 | 5.0 | 7695 | 0.6460 | 0.5988 |
| 0.5395 | 6.0 | 9234 | 0.6822 | 0.5930 |
| 0.5107 | 7.0 | 10773 | 0.7306 | 0.5935 |
| 0.484 | 8.0 | 12312 | 0.7839 | 0.5906 |
| 0.4562 | 9.0 | 13851 | 0.8433 | 0.5917 |
| 0.4302 | 10.0 | 15390 | 0.8882 | 0.5893 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Erika1964/erikaai
|
Erika1964
| 2023-11-22T17:56:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-22T17:49:54Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### erikaai Dreambooth model trained by Erika1964 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
potyoresz/kataai
|
potyoresz
| 2023-11-22T17:54:44Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-22T17:50:07Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kataai Dreambooth model trained by potyoresz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
ProsESportu/mpweb
|
ProsESportu
| 2023-11-22T17:52:21Z | 3 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"mpnet",
"fill-mask",
"feature-extraction",
"license:mit",
"region:us"
] |
feature-extraction
| 2023-11-22T17:49:32Z |
---
license: mit
library_name: transformers.js
pipeline_tag: feature-extraction
---
|
DaColdest/Reinforce-CartPole-v1
|
DaColdest
| 2023-11-22T17:51:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T17:51:30Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tabtap/sd-v1-5.safetensor
|
tabtap
| 2023-11-22T17:51:22Z | 57 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2023-11-22T17:50:22Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
china girl hottie, fine eyes++, confident, super seductive, super
alluring creamy, seductive pose, brothel, wet skin, wet big breasts, no
shirt bra ++, 8k portrait of beautiful with fine hair, intricate,
elegant, highly detailed, majestic, digital photography, art by artgerm
and ruan jia and greg rutkowski surreal painting gold butterfly filigree,
broken glass, (masterpiece, sidelighting, finely detailed beautiful eyes:
1.2), hdr, water magic, corneo_tentacle_sex, feel very enjoy,
parameters:
negative_prompt: >-
(3d, render CGI, doll, painting, fake, cartoon, 3d modelling:1.4),
(worst quality, low quality:1.4), monochrome, deformed, malformed,
deformed face, bad teeth, bad hands, bad fingers, bad eyes, long
body, blurry, duplicate, cloned, duplicate body parts, disfigured,
extra limbs, fused fingers, extra fingers, twisted, distorted,
malformed hands, mutated hands and fingers, conjoined, missing limbs,
bad anatomy, bad proportions, logo, watermark, text, copyright,
signatures, lo-res, mutated, mutilated, artefacts, gross, ugly,
hand in pocket, too many fingers on one hand, conjoined bodies,
incomplete fingers, weird fingernails, displaced fingernails, double
belly-button
output:
url: images/pu5PIZeNNONiFvGdYhrhS.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# sd-v1-5
<Gallery />
## Model description

## Download model
Weights for this model are available in Safetensors format.
[Download](/tabtap/sd-v1-5.safetensor/tree/main) them in the Files & versions tab.
|
perevalov/SMatBERT-NER-multilingual
|
perevalov
| 2023-11-22T17:47:14Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"material science",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-20T20:49:59Z |
---
metrics:
- precision
- recall
- f1
pipeline_tag: token-classification
tags:
- material science
---
# SpringerMaterials BERT NER multilingual
This model was fine-tuned on NER-labeled user queries from [Springer Materials](https://materials.springer.com/). The entity types supported by this model are: `SUBSTANCE`, `PROPERTY`.
The fine-tuning data is available [here](https://doi.org/10.6084/m9.figshare.24592428).
## Input examples
* 🇬🇧 `ethylene's phase diagram`
* 🇩🇪 `Phasendiagramm des Ethylens`
* 🇷🇺 `фазовая диаграмма этилена`
## How to use this model
```python
from transformers import pipeline
model_checkpoint = "perevalov/SMatBERT-NER-multilingual"
token_classifier = pipeline(
"ner", model=model_checkpoint, aggregation_strategy="simple"
)
token_classifier("boiling point of water")
```
|
skiszkao/skiszkaoai
|
skiszkao
| 2023-11-22T17:43:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-22T17:39:00Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### skiszkaoai Dreambooth model trained by skiszkao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
abolton99/religion_politics_gambling
|
abolton99
| 2023-11-22T17:36:28Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-11-22T17:33:08Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# abolton99/religion_politics_gambling
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("abolton99/religion_politics_gambling")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
sajjadamjad/ghostwriter_v4_model_merged
|
sajjadamjad
| 2023-11-22T17:25:24Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"arxiv:1910.09700",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2023-11-22T17:17:48Z |
---
library_name: peft
base_model: bigscience/bloom-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
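A minimal sketch (not part of the original card) of how this configuration maps onto `BitsAndBytesConfig` when loading the base model and attaching the adapter; using this repository id for the adapter is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-3b",
    quantization_config=bnb_config,
)
model = PeftModel.from_pretrained(base_model, "sajjadamjad/ghostwriter_v4_model_merged")
```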
### Framework versions
- PEFT 0.6.3.dev0
|
Frenger/shaojiang
|
Frenger
| 2023-11-22T17:22:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-22T17:17:17Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### shaojiang Dreambooth model trained by Frenger with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:





|
tr-aravindan/distilbert-base-uncased-finetuned-amazon-product-reviews
|
tr-aravindan
| 2023-11-22T17:22:00Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-11-22T16:22:44Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: tr-aravindan/distilbert-base-uncased-finetuned-amazon-product-reviews
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tr-aravindan/distilbert-base-uncased-finetuned-amazon-product-reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.2468
- Validation Loss: 2.1492
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1656, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.3818 | 2.1513 | 0 |
| 2.2468 | 2.1492 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
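A minimal usage sketch (not part of the original card) for querying the fine-tuned masked-language model:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="tr-aravindan/distilbert-base-uncased-finetuned-amazon-product-reviews",
)
# DistilBERT uses the [MASK] token.
print(fill_mask("This product has excellent [MASK] quality."))
```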
|
StevenPerrin/ppo-LunarLander-v2
|
StevenPerrin
| 2023-11-22T17:18:48Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-05T15:29:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.25 +/- 18.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
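A minimal loading-and-evaluation sketch (not from the original card; the checkpoint filename and the gymnasium setup are assumptions based on the usual huggingface_sb3 conventions):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed to follow the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(
    repo_id="StevenPerrin/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```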
|
Ekkologico/Llama-2-7b-chat-python_code_instructions_tiny_codes
|
Ekkologico
| 2023-11-22T17:13:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-11-22T15:20:49Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.3.dev0
|
joddiy/my_awesome_wnut_model
|
joddiy
| 2023-11-22T17:06:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-22T17:03:57Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5514285714285714
- name: Recall
type: recall
value: 0.3577386468952734
- name: F1
type: f1
value: 0.43395165823496346
- name: Accuracy
type: accuracy
value: 0.9444230687016374
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2758
- Precision: 0.5514
- Recall: 0.3577
- F1: 0.4340
- Accuracy: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2644 | 0.5355 | 0.3568 | 0.4283 | 0.9438 |
| No log | 2.0 | 426 | 0.2758 | 0.5514 | 0.3577 | 0.4340 | 0.9444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
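A minimal inference sketch (not part of the original card) using the `transformers` pipeline API:
```python
from transformers import pipeline

# Aggregate sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="joddiy/my_awesome_wnut_model",
    aggregation_strategy="simple",
)
print(ner("The Golden Gate Bridge opened in San Francisco in 1937."))
```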
|
AswanthCManoj/azma-tiny-llama-lora-adapter
|
AswanthCManoj
| 2023-11-22T17:04:55Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"region:us"
] | null | 2023-11-22T17:04:50Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.3.dev0
|