| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 00:47:04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 00:46:57) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
SouhilOuchene/ACCBERT_Part2
|
SouhilOuchene
| 2023-06-20T22:29:32Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"camembert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-20T22:29:19Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# SouhilOuchene/ACCBERT_Part2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
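For illustration, a minimal training sketch with the SetFit library is shown below; the base model and dataset are generic placeholders, not the data actually used for this checkpoint:
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot dataset and base model (not the ones used for this checkpoint)
train_ds = load_dataset("sst2", split="train").shuffle(seed=42).select(range(64))

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss for step 1
    num_iterations=20,                # number of contrastive pairs generated per example
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # runs both steps: contrastive fine-tuning, then fitting the classification head
```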
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("SouhilOuchene/ACCBERT_Part2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
SouhilOuchene/NACCBERT_Part2
|
SouhilOuchene
| 2023-06-20T22:29:19Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"camembert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-20T22:29:05Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# SouhilOuchene/NACCBERT_Part2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("SouhilOuchene/NACCBERT_Part2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
TheBloke/airoboros-7B-gpt4-1.3-GGML
|
TheBloke
| 2023-06-20T22:27:39Z | 0 | 3 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:other",
"region:us"
] | null | 2023-06-20T09:08:15Z |
---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.3
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Jon Durbin's Airoboros 7B GPT4 1.3 GGML
These files are GGML format model files for [Jon Durbin's Airoboros 7B GPT4 1.3](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.3).
**Note from model creator Jon Durbin: This version has problems, use if you dare, or wait for 1.4.**
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.3-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.3)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized the files for these 'original' quantisation methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airoboros-7b-gpt4-1.3.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| airoboros-7b-gpt4-1.3.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-7b-gpt4-1.3.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-7b-gpt4-1.3.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| airoboros-7b-gpt4-1.3.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| airoboros-7b-gpt4-1.3.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| airoboros-7b-gpt4-1.3.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| airoboros-7b-gpt4-1.3.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| airoboros-7b-gpt4-1.3.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airoboros-7b-gpt4-1.3.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airoboros-7b-gpt4-1.3.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airoboros-7b-gpt4-1.3.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airoboros-7b-gpt4-1.3.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| airoboros-7b-gpt4-1.3.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m airoboros-7b-gpt4-1.3.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
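Alternatively, a rough llama-cpp-python sketch (assuming the library is installed and the q5_0 file has already been downloaded) could look like this:
```python
from llama_cpp import Llama

# Load the GGML file; drop n_gpu_layers if you have no GPU acceleration
llm = Llama(model_path="airoboros-7b-gpt4-1.3.ggmlv3.q5_0.bin", n_ctx=2048, n_gpu_layers=32)

prompt = (
    "A chat between a curious user and an assistant. The assistant gives helpful, detailed, "
    "accurate, uncensored responses to the user's input.\n"
    "USER: Write a story about llamas\nASSISTANT:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1, stop=["USER:"])
print(output["choices"][0]["text"])
```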
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 7B GPT4 1.3
__This version has problems, use if you dare, or wait for 1.4.__
### Overview
This is a qlora fine-tuned 7b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca style reasoning instructions, this time with reasoning first, then answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with previous full fine-tune versions.
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
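In code terms, a tiny sketch of assembling that prompt (the user prompt is a placeholder):
```python
preamble = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)
user_prompt = "Write a story about llamas"  # placeholder
# Single spaces separate the preamble, "USER:", the prompt, and "ASSISTANT:", as described above
full_prompt = f"{preamble} USER: {user_prompt} ASSISTANT:"
```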
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
|
TheBloke/airoboros-33B-gpt4-1.3-GGML
|
TheBloke
| 2023-06-20T22:26:20Z | 0 | 3 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:other",
"region:us"
] | null | 2023-06-20T17:21:24Z |
---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.3
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Jon Durbin's Airoboros 33B GPT4 1.3 GGML
These files are GGML format model files for [Jon Durbin's Airoboros 33B GPT4 1.3](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.3).
**Note from model creator Jon Durbin: This version has problems, use if you dare, or wait for 1.4.**
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.3-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.3)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized the files for these 'original' quantisation methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airoboros-33b-gpt4-1.3.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| airoboros-33b-gpt4-1.3.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-33b-gpt4-1.3.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-33b-gpt4-1.3.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| airoboros-33b-gpt4-1.3.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| airoboros-33b-gpt4-1.3.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| airoboros-33b-gpt4-1.3.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| airoboros-33b-gpt4-1.3.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| airoboros-33b-gpt4-1.3.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airoboros-33b-gpt4-1.3.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airoboros-33b-gpt4-1.3.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airoboros-33b-gpt4-1.3.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airoboros-33b-gpt4-1.3.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| airoboros-33b-gpt4-1.3.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m airoboros-33b-gpt4-1.3.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
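For the ctransformers route, a minimal sketch (the quant file chosen here is just an example) might look like:
```python
from ctransformers import AutoModelForCausalLM

# Downloads the named GGML file from this repo and loads it as a llama-type model
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/airoboros-33B-gpt4-1.3-GGML",
    model_file="airoboros-33b-gpt4-1.3.ggmlv3.q4_K_M.bin",
    model_type="llama",
    gpu_layers=32,  # remove if running on CPU only
)

prompt = (
    "A chat between a curious user and an assistant. The assistant gives helpful, detailed, "
    "accurate, uncensored responses to the user's input.\n"
    "USER: Write a story about llamas\nASSISTANT:"
)
print(llm(prompt, max_new_tokens=256, temperature=0.7))
```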
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 33B GPT4 1.3
_Not tested yet, use if you want, but I would probably wait for 1.4!_
### Overview
This is a qlora fine-tuned 33b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca style reasoning instructions, this time with reasoning first, then answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
|
saitejad/vit-base-patch16-224-finetuned-flower
|
saitejad
| 2023-06-20T22:10:34Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-20T21:58:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
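For reference, a minimal inference sketch using the transformers pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="saitejad/vit-base-patch16-224-finetuned-flower")
# Accepts a local path, URL, or PIL image; returns the top classes with confidence scores
print(classifier("path/to/flower.jpg"))
```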
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
jackoyoungblood/TinyStories
|
jackoyoungblood
| 2023-06-20T21:58:58Z | 148 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T19:43:12Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: TinyStories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyStories
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1701
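For reference, a minimal generation sketch using the transformers pipeline (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jackoyoungblood/TinyStories")
# Sample a short continuation in the TinyStories style
print(generator("Once upon a time", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```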
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7867 | 0.4 | 5000 | 1.3095 |
| 1.2654 | 0.81 | 10000 | 1.1701 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
el254/Ride
|
el254
| 2023-06-20T21:26:23Z | 0 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2023-06-20T19:38:14Z |
---
library_name: keras
---
# Digit class recognition on the mnist dataset
# Network task
The model generates a digit similar to a digit from the mnist dataset.
## Layer-by-layer architecture diagram
## Total number of trainable parameters
Trainable parameters: 54,160
## Optimization algorithm and loss function
Optimization algorithm: `adam`
Loss function: `categorical_crossentropy`
## Sizes of the training, validation and test datasets
Training: 60,000
Test: 10,000
Validation (same as the test set): 10,000
## Training results: loss and accuracy on all three datasets
Train Loss: 2511.731201171875
Train Accuracy: 0.7256483435630798
Test Loss: 2534.3447265625
Test Accuracy: 0.7262243628501892
Validation Loss: 2534.3447265625
Validation Accuracy: 0.7262243628501892
|
Theiss/q-FrozenLake-v1-4x4-noSlippery
|
Theiss
| 2023-06-20T21:24:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T21:24:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Theiss/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
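Note that `load_from_hub` is not a library function; in the Deep Reinforcement Learning course it is typically defined roughly as below (a sketch, assuming the model is stored as a pickled dict and that `gym` has been imported as in the snippet above):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict (Q-table, env_id, hyperparameters) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```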
|
TheBloke/baichuan-llama-7B-GGML
|
TheBloke
| 2023-06-20T21:11:51Z | 0 | 11 | null |
[
"text-generation",
"zh",
"en",
"arxiv:1910.07467",
"arxiv:2009.03300",
"license:other",
"region:us"
] |
text-generation
| 2023-06-20T20:35:24Z |
---
inference: false
license: other
language:
- zh
- en
pipeline_tag: text-generation
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Fire Balloon's Baichuan Llama 7B GGML
These files are GGML format model files for [Fire Balloon's Baichuan Llama 7B](https://huggingface.co/fireballoon/baichuan-llama-7b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
This model is a Llama conversion of [Baichuan Inc's Baichuan 7B](https://huggingface.co/baichuan-inc/baichuan-7B). It contains the same data, but rewritten by Fire Balloon into the familiar Llama format.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/baichuan-llama-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/baichuan-llama-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fireballoon/baichuan-llama-7b)
## Prompt template
A general prompt template is unknown at this point.
The example given in the README is a 1-shot categorisation:
```
Hamlet->Shakespeare\nOne Hundred Years of Solitude->
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized the files for these 'original' quantisation methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| baichuan-llama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 3.02 GB | 5.52 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| baichuan-llama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.76 GB | 6.26 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baichuan-llama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.45 GB | 5.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baichuan-llama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 3.11 GB | 5.61 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| baichuan-llama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.94 GB | 6.44 GB | Original llama.cpp quant method, 4-bit. |
| baichuan-llama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.38 GB | 6.88 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| baichuan-llama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.26 GB | 6.76 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| baichuan-llama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 4.01 GB | 6.51 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| baichuan-llama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.81 GB | 7.31 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| baichuan-llama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.25 GB | 7.75 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| baichuan-llama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.98 GB | 7.48 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| baichuan-llama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.85 GB | 7.35 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| baichuan-llama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| baichuan-llama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.44 GB | 9.94 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m baichuan-llama-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Fire Balloon's Baichuan Llama 7B
# baichuan-llama-7B
使用[LLaMA](https://huggingface.co/huggyllama/llama-7b)格式保存的[baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B)。可以直接使用LlamaForCausalLM和LlamaTokenizer加载。
[baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) model saved in the format of the [LLaMA](https://huggingface.co/huggyllama/llama-7b) model. You can directly use LlamaForCausalLM and LlamaTokenizer to load the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-llama-7b", device_map="auto")
```
The following is from the original repo [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B).
# baichuan-7B
<!-- Provide a quick summary of what the model is/does. -->
baichuan-7B是由百川智能开发的一个开源的大规模预训练模型。基于Transformer结构,在大约1.2万亿tokens上训练的70亿参数模型,支持中英双语,上下文窗口长度为4096。在标准的中文和英文权威benchmark(C-EVAL/MMLU)上均取得同尺寸最好的效果。
如果希望使用baichuan-7B(如进行推理、Finetune等),我们推荐使用配套代码库[baichuan-7B](https://github.com/baichuan-inc/baichuan-7B)。
baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).
If you wish to use baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B).
## Why use baichuan-7B
- 在同尺寸模型中baichuan-7B达到了目前SOTA的水平,参考下面MMLU指标
- baichuan-7B使用自有的中英文双语语料进行训练,在中文上进行优化,在C-Eval达到SOTA水平
- 不同于LLaMA完全禁止商业使用,baichuan-7B使用更宽松的开源协议,允许用于商业目的
- Among models of the same size, baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
- baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
- Unlike LLaMA, which completely prohibits commercial use, baichuan-7B employs a more lenient open-source license, allowing for commercial purposes.
## How to Get Started with the Model
The following is a task of performing 1-shot inference using baichuan-7B, where the author's name is given based on the work, with the correct output being "夜雨寄北->李商隐".
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-llama-7b", device_map="auto")
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
The following is a task of performing 1-shot inference using baichuan-7B, where the author's name is given based on the work, with the correct output being "One Hundred Years of Solitude->Gabriel Garcia Marquez"
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-llama-7b", device_map="auto")
inputs = tokenizer('Hamlet->Shakespeare\nOne Hundred Years of Solitude->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 百川智能(Baichuan Intelligent Technology)
- **Email**: opensource@baichuan-inc.com
- **Language(s) (NLP):** Chinese/English
- **License:** [baichuan-7B License](https://huggingface.co/baichuan-inc/baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
### Model Sources
<!-- Provide the basic links for the model. -->
整体模型基于标准的Transformer结构,我们采用了和LLaMA一样的模型设计
- **Position Embedding**:采用rotary-embedding,是现阶段被大多数模型采用的位置编码方案,具有很好的外推性。
- **Feedforward Layer**:采用SwiGLU,Feedforward变化为(8/3)倍的隐含层大小,即11008。
- **Layer Normalization**: 基于[RMSNorm](https://arxiv.org/abs/1910.07467)的Pre-Normalization。
具体参数和见下表
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
The overall model is based on the standard Transformer structure, and we have adopted the same model design as LLaMA:
- Position Embedding: We use rotary-embedding, which is the position encoding scheme adopted by most models at this stage, and it has excellent extrapolation capabilities.
- Feedforward Layer: We use SwiGLU. The feedforward changes to (8/3) times the size of the hidden layer, that is, 11008.
- Layer Normalization: Pre-Normalization based on [RMSNorm](https://arxiv.org/abs/1910.07467).
The specific parameters are as follows:
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
我们同时开源出了和本模型配套的训练代码,允许进行高效的Finetune用于下游任务,具体参见[baichuan-7B](https://github.com/baichuan-inc/baichuan-7B)。
We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
在没有充分评估风险和采取缓解措施的情况下投入生产使用;任何可能被视为不负责任或有害的使用案例。
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
baichuan-7B可能会产生事实上不正确的输出,不应依赖它产生事实上准确的信息。baichuan-7B是在各种公共数据集上进行训练的。尽管我们已经做出了巨大的努力来清洗预训练数据,但这个模型可能会生成淫秽、偏见或其他冒犯性的输出。
baichuan-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Training Details
训练具体设置参见[baichuan-7B](https://github.com/baichuan-inc/baichuan-7B)。
For specific training settings, please refer to [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B).
## Evaluation
### Chinese Evaluation
#### C-Eval
[The C-Eval dataset](https://cevalbenchmark.com/index.html) is a comprehensive Chinese evaluation benchmark for foundation models, covering 52 subjects and four difficulty levels. We used its dev split as the few-shot source and ran 5-shot tests on the test split.
| Model 5-shot | Average | Avg(Hard) | STEM | Social Sciences | Humanities | Others |
|-----------------------------|---------|-----------|------|-----------------|------------|--------|
| GPT-4 | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8 |
| ChatGPT | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6 |
| Claude-v1.3 | 54.2 | 39.0 | 51.9 | 61.7 | 52.1 | 53.7 |
| Claude-instant-v1.0 | 45.9 | 35.5 | 43.1 | 53.8 | 44.2 | 45.4 |
| moss-moon-003-base (16B) | 27.4 | 24.5 | 27.0 | 29.1 | 27.2 | 26.9 |
| Ziya-LLaMA-13B-pretrain | 30.2 | 22.7 | 27.7 | 34.4 | 32.0 | 28.9 |
| LLaMA-7B-hf | 27.1 | 25.9 | 27.1 | 26.8 | 27.9 | 26.3 |
| ChatGLM-6B | 34.5 | 23.1 | 30.4 | 39.6 | 37.4 | 34.5 |
| Falcon-7B | 25.8 | 24.3 | 25.8 | 26.0 | 25.8 | 25.6 |
| Open-LLaMA-v2-pretrain (7B) | 24.0 | 22.5 | 23.1 | 25.3 | 25.2 | 23.2 |
| TigerBot-7B-base | 25.7 | 27.0 | 27.3 | 24.7 | 23.4 | 26.1 |
| Aquila-7B<sup>*</sup> | 25.5 | 25.2 | 25.6 | 24.6 | 25.2 | 26.6 |
| BLOOM-7B | 22.8 | 20.2 | 21.8 | 23.3 | 23.9 | 23.3 |
| BLOOMZ-7B | 35.7 | 25.8 | 31.3 | 43.5 | 36.6 | 35.6 |
| **baichuan-7B** | 42.8 | 31.5 | 38.2 | 52.0 | 46.2 | 39.3 |
#### Gaokao
[Gaokao](https://github.com/ExpressAI/AI-Gaokao) is a dataset that uses questions from China's college entrance examination (Gaokao) to evaluate the language ability and logical reasoning of large language models.
We kept only the single-answer multiple-choice questions and ran a uniform 5-shot test on all models.
The results are shown below.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 21.41 |
| Ziya-LLaMA-13B-pretrain | 23.17 |
| Falcon-7B | 23.98 |
| TigerBot-7B-base | 25.94 |
| LLaMA-7B | 27.81 |
| ChatGLM-6B | 21.41 |
| BLOOM-7B | 26.96 |
| BLOOMZ-7B | 28.72 |
| Aquila-7B<sup>*</sup> | 24.39 |
| **baichuan-7B** | **36.24** |
#### AGIEval
[AGIEval](https://github.com/microsoft/AGIEval) aims to evaluate a model's general abilities on cognition- and problem-solving-related tasks.
We kept only the four-option single-answer multiple-choice questions, split them randomly, and ran a uniform 5-shot test on all models.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 23.49 |
| Ziya-LLaMA-13B-pretrain | 27.64 |
| Falcon-7B | 27.18 |
| TigerBot-7B-base | 25.19 |
| LLaMA-7B | 28.17 |
| ChatGLM-6B | 23.49 |
| BLOOM-7B | 26.55 |
| BLOOMZ-7B | 30.27 |
| Aquila-7B<sup>*</sup> | 25.58 |
| **baichuan-7B** | **34.44** |
<sup>*</sup>The Aquila results are taken from the [official BAAI website](https://model.baai.ac.cn/model-detail/100098) and are provided for reference only.
### English Leaderboard
In addition to Chinese, we also tested the model's performance in English.
#### MMLU
[MMLU](https://arxiv.org/abs/2009.03300) is an English evaluation dataset that includes 57 multiple-choice tasks, covering elementary mathematics, American history, computer science, law, etc. The difficulty ranges from high school level to expert level, making it a mainstream LLM evaluation dataset.
We adopted the [open-source](https://github.com/hendrycks/test) evaluation scheme, and the final 5-shot results are as follows:
| Model | Humanities | Social Sciences | STEM | Other | Average |
|----------------------------------------|-----------:|:---------------:|:----:|:-----:|:-------:|
| LLaMA-7B<sup>2</sup> | 34.0 | 38.3 | 30.5 | 38.1 | 35.1 |
| Falcon-7B<sup>1</sup> | - | - | - | - | 35.0 |
| mpt-7B<sup>1</sup> | - | - | - | - | 35.6 |
| ChatGLM-6B<sup>0</sup> | 35.4 | 41.0 | 31.3 | 40.5 | 36.9 |
| BLOOM 7B<sup>0</sup> | 25.0 | 24.4 | 26.5 | 26.4 | 25.5 |
| BLOOMZ 7B<sup>0</sup> | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
| moss-moon-003-base (16B)<sup>0</sup> | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
| moss-moon-003-sft (16B)<sup>0</sup> | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
| **baichuan-7B<sup>0</sup>** | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
The superscript in the Model column indicates the source of the results.
```
0:reimplemented
1:https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
2:https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
```
|
LOGQS/ppo-Pyramids
|
LOGQS
| 2023-06-20T20:56:07Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-20T20:55:26Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: LOGQS/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mcpotato/potato-bot
|
mcpotato
| 2023-06-20T20:55:34Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-21T14:25:20Z |
---
title: zZWyoMVOdIUJunWk
emoji: 😂
colorFrom: blue
colorTo: green
sdk: docker
pinned: false
license: mit
---
|
dnihil/TypeB
|
dnihil
| 2023-06-20T20:47:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-20T20:33:32Z |
# TypeB
This is a merge of a couple of models and LoRAs; I can't remember each one, but I believe the base was made from Mistoon.
This was going to be a personal model, but the frens on Baest were interested in it, so everyone gets it.
It works well with the Anything VAE; try that if your generations come out bland or colorless or have purple artifacts.
|
anavarro78/finetuning-sentiment-model-3000-samples
|
anavarro78
| 2023-06-20T20:46:52Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T03:32:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2823
- Accuracy: 0.904
- F1: 0.9372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KelvinHu/ppo-LunarLander-v2
|
KelvinHu
| 2023-06-20T20:29:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T20:21:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 295.04 +/- 22.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
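As a minimal sketch (not the author's code), the checkpoint can be loaded and evaluated roughly like this; the filename `ppo-LunarLander-v2.zip` is an assumption about how the model was saved:
```python
import gymnasium as gym  # newer SB3 releases use gymnasium; older setups may use gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Download the checkpoint from the Hub (the filename is an assumption)
checkpoint = load_from_hub(
    repo_id="KelvinHu/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the policy over a few episodes
env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```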
|
mgmeskill/CartPole-v1
|
mgmeskill
| 2023-06-20T20:25:04Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T20:24:54Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mrsrincewind/digits3
|
mrsrincewind
| 2023-06-20T20:23:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-20T19:48:00Z |
---
language:
- ru
metrics:
- accuracy
library_name: keras
---
The model card must contain:
1. A description of the task the network performs;
2. An image of the layer-by-layer architecture showing layer sizes and activation functions;
3. The total number of trainable parameters;
4. The optimization algorithm and loss function used;
5. The sizes of the training, validation and test datasets;
6. Training results: loss and accuracy on all three datasets
# Task description
Given the mnist dataset, determine from an input image the remainder of dividing that digit by 3;
# Layer-by-layer architecture

# Total number of trainable parameters

# Optimization algorithm and loss function
1. The **loss function** used is **categorical cross-entropy**, chosen to improve the quality of the network
2. The **optimizer** used is **adam** from Keras

# Sizes of the training, validation and test datasets:
1. Training set size: **60,000** 28x28 images
2. Validation set size: 10% of the training set = **6,000** 28x28 images
3. Test set size: **10,000** 28x28 images
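A minimal Keras sketch of the setup described in this card (the hidden-layer size is an assumption; the actual architecture is shown in the figures above):
```python
from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Relabel each digit with its remainder modulo 3 and one-hot encode
y_train = keras.utils.to_categorical(y_train % 3, 3)
y_test = keras.utils.to_categorical(y_test % 3, 3)

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation="relu"),   # hidden size is an assumption
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# 10% of the 60 000 training images are held out for validation (6 000 images)
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```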
# Training results

|
lokach/my_model
|
lokach
| 2023-06-20T20:18:47Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-20T14:50:29Z |
# 1. Task description
---
The network takes a photo of some item as input (for example a t-shirt or sneakers) and, based on that input, draws the same kind of item.
Unfortunately, I was not able to get a good result: for some reason the network always drew a t-shirt at the output.
# 2. Architecture images
---
The network consists of two other networks.
1. A network I wrote myself 
2. A pre-trained network that was given to me
# 3. Total number of trainable parameters
---

# 4. Optimization algorithm and loss function
---
The code uses the adam optimizer and the Sparse Categorical Crossentropy loss function.
# 5. Sizes of the training, validation and test datasets
---

|
Pogonyalov47/Python_Task_3_Variant
|
Pogonyalov47
| 2023-06-20T20:14:44Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"ru",
"region:us"
] | null | 2023-06-20T19:57:35Z |
---
language:
- ru
---
# Task:
Given the mnist dataset, determine from an input image the remainder of dividing that digit by 3
# Implementation:
## Description
The network receives an image of a digit as input. The image has size 28x28; there is a list of correct answers, which we transform so that the network recognizes the digits modulo 3.
## Layer-by-layer architecture

## Total number of trainable parameters

## Loss function and optimizer used:
The loss function used was **categorical cross-entropy** together with the standard **adam** optimizer

## Sizes of the training, validation and test datasets
The original dataset was split into 3 parts:
1. Training: 54,000 images
2. Validation: 6,000 images
3. Test: 10,000 images
## Results
The training results of the network are shown in the screenshot:

|
Moniksama/finetuned-bert-covid-tweets
|
Moniksama
| 2023-06-20T20:10:18Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-17T13:58:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: finetuned-bert-covid-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-covid-tweets
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2862
- Accuracy: 0.9175
- F1-score: 0.9177
- Precision: 0.9179
- Recall: 0.9175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:|
| 0.1911 | 1.0 | 513 | 0.2740 | 0.9176 | 0.9176 | 0.9185 | 0.9176 |
| 0.128 | 2.0 | 1026 | 0.2689 | 0.9262 | 0.9264 | 0.9267 | 0.9262 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
CyberTea/neuro5_fashion_mnist
|
CyberTea
| 2023-06-20T20:09:15Z | 6 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T15:34:05Z |
# Image class recognition on the fashion_mnist dataset.
# Task
The model recognizes which of 3 classes (0 - clothing, 1 - footwear, 2 - bag) an image belongs to.
## Layer-by-layer architecture:

## Total number of trainable parameters
Trainable parameters: 16,547
## Optimization algorithm and loss function
Optimizer - `adam`
Loss function - `categorical_crossentropy`
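A minimal Keras sketch of this setup (the mapping from the 10 fashion_mnist labels to the 3 super-classes and the hidden-layer size are assumptions, not taken from the card):
```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Map the 10 fashion_mnist labels onto 3 super-classes:
# 0 - clothing, 1 - footwear, 2 - bag (the mapping is an assumption)
superclass = np.array([0, 0, 0, 0, 0, 1, 0, 1, 2, 1])
y_train = keras.utils.to_categorical(superclass[y_train], 3)
y_test = keras.utils.to_categorical(superclass[y_test], 3)

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(20, activation="relu"),   # layer size is an assumption
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```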
## Sizes of the training, validation and test datasets:
Training: 60000
Test: 10000
Validation (same as the test set): 10000
## Training results: loss and accuracy on all three datasets:
Train Loss: 0.002967413514852524
Train Accuracy: 0.9993500113487244
Test Loss: 0.016184156760573387
Test Accuracy: 0.9958000183105469
Validation Loss: 0.016184156760573387
Validation Accuracy: 0.9958000183105469
## Program and network output:

|
AravindVadlapudi02/vit-base-patch16-224-in21k-vit-final-processed-71
|
AravindVadlapudi02
| 2023-06-20T20:08:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T20:08:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Igor2004/zadanie4
|
Igor2004
| 2023-06-20T20:05:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-20T19:36:06Z |
Hello Vladimir Viktorovich, this is Igor Semyonov, a student of group DRPK-47. I have left to work as a camp counsellor with RSO, and I need at least the minimum passing grade in your course.
My neural network can output various generated 28x28 images of items from fashion_mnist as the input numbers are varied. Unfortunately, due to lack of time I could not implement anything more.
Thank you in advance
|
Rage4/Gasilin_var8
|
Rage4
| 2023-06-20T20:05:17Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T19:14:31Z |
1. The network generates digits similar to the digits from the mnist dataset.
2. 
3. Total number of trainable parameters: 54160
4. Optimization algorithm and loss function used: adam and categorical_crossentropy.
5. Sizes of the training, validation and test datasets: training: 60000, validation: 10000, test: 10000
6. Training results: loss and accuracy on all three datasets: training: loss: 2554.3391, accuracy: 0.7287; validation: loss: 2521.8169, accuracy: 0.7296; test: loss: 2570.7542, accuracy: 0.7292
|
Bobiiii/FinalNumRemindByThree
|
Bobiiii
| 2023-06-20T19:45:41Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T17:27:02Z |
# Model description
The model takes digit images from the `mnist` dataset, recognizes the number and outputs the remainder of dividing that number by 3.
The model consists of two parts.
The first part recognizes the digit and passes that value to the second part of the model.
The second part divides the obtained number by three.
The output is an array of three elements.
The index of the largest element corresponds to the predicted remainder.
For example: `[0,0,1] - 2`
Example of the model in action:

As you can see, the model handles the task well and predicts the result accurately.
# Model architecture

# Summary
Model: "ImageToRemainder"
| Layer (type) | Output Shape | Param # |
|-----------------------------|------------------|---------|
| MnistImg (InputLayer) | [(None, 28, 28)] | 0 |
| ImgToNum (Functional) | (None, 10) | 124310 |
| NumToRemainder (Functional) | (None, 3) | 155 |
Total params: `124,465`
Trainable params: `124,465`
Non-trainable params: `0`
# Optimization algorithm and loss function
Optimizer: `adam`
Loss function: `categorical_crossentropy`
Validation - `validation_split=0.3`
# Sizes of the training, validation and test datasets
Train shape: `42000`
Validation shape: `18000`
Test shape: `10000`
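A minimal sketch of the described two-stage composition using the Keras functional API (the internal layer sizes of the sub-models are assumptions; only the input/output shapes follow the summary above):
```python
from tensorflow import keras
from tensorflow.keras import layers

# Sub-model 1: image -> digit probabilities (internal sizes are assumptions)
img_in = keras.Input(shape=(28, 28))
x = layers.Flatten()(img_in)
x = layers.Dense(128, activation="relu")(x)
digit_out = layers.Dense(10, activation="softmax")(x)
img_to_num = keras.Model(img_in, digit_out, name="ImgToNum")

# Sub-model 2: digit probabilities -> remainder of division by 3
num_in = keras.Input(shape=(10,))
h = layers.Dense(10, activation="relu")(num_in)
rem_out = layers.Dense(3, activation="softmax")(h)
num_to_remainder = keras.Model(num_in, rem_out, name="NumToRemainder")

# Composition: image -> remainder, e.g. an output of [0, 0, 1] means remainder 2
full_in = keras.Input(shape=(28, 28), name="MnistImg")
model = keras.Model(full_in, num_to_remainder(img_to_num(full_in)), name="ImageToRemainder")
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```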
# Training results: loss and accuracy.
Training history of `accuracy` and `loss` for `train` and `validation`


Evaluation on the `test` data after training:
- Test loss: `0.07424477487802505`
- Test accuracy: `0.9800999760627747`
|
mariabashkeva/Exam
|
mariabashkeva
| 2023-06-20T19:40:20Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T17:38:23Z |
1. Description of the task the network performs:
Given the mnist dataset, build an autoencoder that takes an image of a digit as input and produces the same image as output;
2. Layer-by-layer architecture with layer sizes and activation functions:

3. Total number of trainable parameters:
131457
4. Optimization algorithm and loss function:
adam, mean_squared_error
5. Sizes of the training, validation and test datasets:
Training: 60000
Test: 10000
6. Training results: loss and accuracy on all three datasets.

|
imaginepeach/hochuzachetpls
|
imaginepeach
| 2023-06-20T19:39:01Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"dataset:mnist",
"region:us"
] | null | 2023-06-20T17:59:16Z |
---
datasets:
- mnist
library_name: keras
---
1. An autoencoder that compresses and reconstructs images, in this example images of digits
2. Layer-by-layer architecture with layer sizes and activation functions (encoder and decoder; the activation names carry numeric suffixes because of how plot_model labels layers)

3. Total number of trainable parameters
Total number of trainable parameters: 489363
total_params = autoenc.count_params()
print("Total number of trainable parameters:", total_params)
4. Optimization algorithm and loss function
Optimizer: adam
Loss function: mean_squared_error (mean squared error)
5. Sizes of the training, validation and test datasets:
Training set size: 48000
Validation set size: 12000
Test set size: 10000
(train_size = len(train_x)
val_size = len(val_x)
test_size = len(test_x))
6. Training results: loss and accuracy on all three datasets
Loss on the training set: 0.033324986696243286
Accuracy on the training set: 0.009708333760499954
Loss on the validation set: 0.03425576910376549
Accuracy on the validation set: 0.0100833335891366
Loss on the test set: 0.034134093672037125
Accuracy on the test set: 0.011500000022351742


Example output

|
asdf343/ppo-LunarLander-v2
|
asdf343
| 2023-06-20T19:32:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T19:31:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.41 +/- 19.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
tanmayyyj/ppo-PyramidsRND
|
tanmayyyj
| 2023-06-20T19:20:23Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-20T19:20:19Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: tanmayyyj/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Au3609/Exam
|
Au3609
| 2023-06-20T19:16:03Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T17:53:51Z |
Given the mnist dataset, determine the digit from the input image
Total params: 118,282
Optimization algorithm used: Adam. Loss function: sparse categorical cross-entropy

LOSS

ACCURACY

|
ronig/pdb_bpe_tokenizer_1024_mlm
|
ronig
| 2023-06-20T18:52:48Z | 0 | 0 | null |
[
"en",
"dataset:ronig/pdb_sequences",
"license:mit",
"region:us"
] | null | 2023-03-25T05:24:09Z |
---
language: en
license: mit
datasets:
- ronig/pdb_sequences
---
# PDB Protein BPE Tokenizer
A protein sequence tokenizer trained on [PDB Sequences](https://huggingface.co/datasets/ronig/pdb_sequences) with `vocabulary size = 1024`
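A short usage sketch, assuming the repository ships tokenizer files loadable through 🤗 Transformers (the example sequence is arbitrary):
```python
from transformers import AutoTokenizer

# Assumes the repo exposes tokenizer files compatible with AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ronig/pdb_bpe_tokenizer_1024_mlm")
ids = tokenizer("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").input_ids
print(len(ids), ids[:10])
```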
|
climatebert/distilroberta-base-climate-detector
|
climatebert
| 2023-06-20T18:52:03Z | 26,073 | 15 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:climatebert/climate_detection",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
datasets:
- climatebert/climate_detection
language:
- en
metrics:
- accuracy
---
# Model Card for distilroberta-base-climate-detector
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for detecting climate-related paragraphs.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-detector model is fine-tuned on our [climatebert/climate_detection](https://huggingface.co/climatebert/climate_detection) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_detection"
model_name = "climatebert/distilroberta-base-climate-detector"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
climatebert/distilroberta-base-climate-tcfd
|
climatebert
| 2023-06-20T18:51:43Z | 5,747 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"climate",
"en",
"dataset:climatebert/tcfd_recommendations",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
datasets:
- climatebert/tcfd_recommendations
language:
- en
metrics:
- accuracy
tags:
- climate
---
# Model Card for distilroberta-base-climate-tcfd
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the four TCFD recommendation categories ([fsb-tcfd.org](https://www.fsb-tcfd.org)).
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-tcfd model is fine-tuned on our [climatebert/tcfd_recommendations](https://huggingface.co/climatebert/tcfd_recommendations) dataset using only the four recommendation categories (i.e., we remove the non-climate-related class from the dataset).
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/tcfd_recommendations"
model_name = "climatebert/distilroberta-base-climate-tcfd"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
AravindVadlapudi02/vit-base-patch16-224-in21k-vit-final-original-69
|
AravindVadlapudi02
| 2023-06-20T18:48:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T18:47:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Dilp/Taxi-v3
|
Dilp
| 2023-06-20T18:45:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T18:45:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Dilp/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
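A fuller sketch of the same usage, assuming the course convention that the pickle holds a dict with the Q-table and the environment id; `load_from_hub` here is a small helper built on `hf_hub_download`, not an official API:
```python
import pickle
import gymnasium as gym  # the original notebook may use gym instead of gymnasium
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled dict produced by the Q-learning notebook
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Dilp/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout with the learned Q-table (the "qtable" key is an assumption)
state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```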
|
SouhilOuchene/AC_pre_Part2
|
SouhilOuchene
| 2023-06-20T18:41:47Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-20T18:40:43Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# SouhilOuchene/AC_pre_Part2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("SouhilOuchene/AC_pre_Part2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
mgmeskill/SpaceInvadersNoFrameskip-v4
|
mgmeskill
| 2023-06-20T18:37:34Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-18T02:13:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 789.50 +/- 316.74
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mgmeskill -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mgmeskill -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mgmeskill
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 50000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 5000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
destrat/zachet
|
destrat
| 2023-06-20T18:29:48Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T17:40:01Z |
# Digit class recognition on the mnist dataset.
# Task
The model generates a digit similar to a digit from the mnist dataset
## Layer-by-layer architecture:

## Total number of trainable parameters
Trainable parameters: 54,160
## Optimization algorithm and loss function
Optimizer - `adam`
Loss function - `categorical_crossentropy`
## Sizes of the training, validation and test datasets:
Training: 60000
Test: 10000
Validation (same as the test set): 10000
## Training results: loss and accuracy on all three datasets:
Train Loss: 2511.731201171875
Train Accuracy: 0.7256483435630798
Test Loss: 2534.3447265625
Test Accuracy: 0.7262243628501892
Validation Loss: 2534.3447265625
Validation Accuracy: 0.7262243628501892
|
Saed2023/layoutlmv3-finetuned-Algo_427Images
|
Saed2023
| 2023-06-20T18:24:04Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-18T18:24:52Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-Algo_427Images
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-Algo_427Images
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
- Precision: 0.9937
- Recall: 0.9964
- F1: 0.9950
- Accuracy: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.25 | 100 | 0.0082 | 0.9505 | 0.9367 | 0.9435 | 0.9983 |
| No log | 2.5 | 200 | 0.0024 | 0.9883 | 0.9901 | 0.9892 | 0.9997 |
| No log | 3.75 | 300 | 0.0020 | 0.9883 | 0.9919 | 0.9901 | 0.9997 |
| No log | 5.0 | 400 | 0.0016 | 0.9910 | 0.9928 | 0.9919 | 0.9998 |
| 0.0301 | 6.25 | 500 | 0.0015 | 0.9910 | 0.9928 | 0.9919 | 0.9998 |
| 0.0301 | 7.5 | 600 | 0.0014 | 0.9928 | 0.9946 | 0.9937 | 0.9998 |
| 0.0301 | 8.75 | 700 | 0.0013 | 0.9928 | 0.9946 | 0.9937 | 0.9998 |
| 0.0301 | 10.0 | 800 | 0.0013 | 0.9937 | 0.9964 | 0.9950 | 0.9999 |
| 0.0301 | 11.25 | 900 | 0.0013 | 0.9928 | 0.9946 | 0.9937 | 0.9998 |
| 0.002 | 12.5 | 1000 | 0.0013 | 0.9937 | 0.9964 | 0.9950 | 0.9999 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Bananaws/Python_Task6
|
Bananaws
| 2023-06-20T18:20:11Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"ru",
"dataset:mnist",
"region:us"
] | null | 2023-06-20T14:09:14Z |
---
datasets:
- mnist
language:
- ru
metrics:
- accuracy
library_name: keras
---
1. Description of the task the network performs.
Variant 6. Using the mnist dataset, an autoencoder was built that takes an image of a digit as input and produces the same image as output.
2. Layer-by-layer architecture with layer sizes and activation functions.

3. Total number of trainable parameters.
It is 131457, as can be seen in the code.
4. Optimization algorithm and loss function.
The optimizer is adam and the loss function is mse (mean_squared_error); a minimal code sketch is given at the end of this card.
5. Sizes of the training, validation and test datasets.
Training: 48,000.
Test: 10,000.
Validation: 12,000 (i.e. 20% of the original 60,000-image training set)
6. Training results: loss and accuracy on all three datasets.

For the test set, loss: 0.0339 and accuracy: 0.0097
Training result:

|
gaiamolinaro/ppo-LunarLander-v2
|
gaiamolinaro
| 2023-06-20T18:16:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T18:15:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.58 +/- 35.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ArturStepanenko/pythonV6
|
ArturStepanenko
| 2023-06-20T18:15:24Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"code",
"ru",
"dataset:mnist",
"region:us"
] | null | 2023-06-20T16:45:08Z |
---
datasets:
- mnist
language:
- ru
metrics:
- accuracy
library_name: keras
tags:
- code
---
1. A model that takes an image of a digit as input and produces the same image as output
2. Layer-by-layer architecture with layer sizes and activation functions
3. Total number of trainable parameters: 131457
4. Optimization algorithm - adam, loss function - mse
5. Sizes of the training, validation and test datasets: 48000, 6000 and 6000 respectively
6. Training results: loss and accuracy on the training and validation datasets
loss and accuracy metrics on the test dataset via evaluate:
Training results:
|
zadhart/BLOOM-3b-Lora_PlantsHelper
|
zadhart
| 2023-06-20T18:14:51Z | 0 | 0 | null |
[
"text-generation",
"pt",
"dataset:zadhart/PlantsChatbotPTBR",
"license:mit",
"region:us"
] |
text-generation
| 2023-05-17T18:43:55Z |
---
license: mit
datasets:
- zadhart/PlantsChatbotPTBR
language:
- pt
pipeline_tag: text-generation
---
|
Aleksandra131325425/zachet_python_3
|
Aleksandra131325425
| 2023-06-20T18:12:33Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-20T17:55:45Z |
---
library_name: keras
---
A model for digit recognition that outputs each digit modulo 3, trained on the mnist dataset

The total number of trainable parameters is 209,826

In this work I used the categorical_crossentropy loss function, which is intended for multi-class classification.
As the optimizer I used adam.
Since this work uses mnist, the test set has 10,000 samples, the validation set 12,000 and the training set 48,000
The images below show loss and accuracy on all three datasets
accuracy and loss for the test set

accuracy and loss for the validation and training sets

|
demid28/Exam_model
|
demid28
| 2023-06-20T18:07:06Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"ru",
"region:us"
] | null | 2023-06-20T17:34:49Z |
---
language:
- ru
metrics:
- accuracy
library_name: keras
---
# A digit recognition model
## Trained on the "mnist" dataset
## Layer-by-layer architecture:

### Purpose of the model:
This model determines the digit shown in an input image.
The total number of trainable parameters is 431018.
Optimizer: adam
Loss function: categorical_crossentropy
### Dataset sizes:
Training: 45,000
Validation: 15,000
Test: 10,000
### loss and accuracy on the datasets:
Training: loss: 0.0541 - accuracy: 0.9854
Validation: loss: 0.0460 - accuracy: 0.9894
Test: loss: 0.0320 - accuracy: 0.9906
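A minimal Keras sketch matching the description above (the layer sizes are assumptions; the real architecture with its 431018 parameters is shown in the figure):
```python
from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(512, activation="relu"),   # layer sizes are assumptions
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# 25% of the 60 000 training images go to validation (45 000 / 15 000)
model.fit(x_train, y_train, epochs=5, validation_split=0.25)
model.evaluate(x_test, y_test)
```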
|
Disskretnost/neuro9_ashion_mnist
|
Disskretnost
| 2023-06-20T18:06:29Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T17:47:26Z |
# Image class recognition on the fashion_mnist dataset.
# Task
Generate an image that resembles an item from the fashion_mnist set
## Layer-by-layer architecture:
### Full network:

### Encoder:

## Total number of trainable parameters
Trainable parameters: 54,410
## Optimization algorithm and loss function
Optimizer - `adam`
Loss function - `mse`
## Sizes of the training, validation and test datasets:
Training: 60000
Test: 10000
Validation (same as the test set): 10000
## Training results: loss and accuracy on all three datasets:
Train Loss: 0.06076487898826599
Train Accuracy: 0.49122941493988037
Test Loss: 0.06062548980116844
Test Accuracy: 0.4893147945404053
Validation Loss: 0.06062548980116844
Validation Accuracy: 0.4893147945404053
## Program and network output:

|
Peeepy/open-llama-13b-4bit-128g-GPTQ
|
Peeepy
| 2023-06-20T17:50:49Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T15:59:38Z |
---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
This is a 4bit quantised [open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct) using Oobabooga's [GPTQ for LLaMa](https://github.com/oobabooga/GPTQ-for-LLaMa).
Original model readme is below.
# VMware/open-llama-13B-open-instruct
Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for <b>COMMERCIAL USE</b>. <br>
<b> NOTE </b> : The model was trained using the Alpaca prompt template \
<b> NOTE </b> : Fast tokenizer results in incorrect encoding, set the ```use_fast = False``` parameter, when instantiating the tokenizer\
<b> NOTE </b> : The model might struggle with code as the tokenizer merges multiple spaces
## License
- <b>Commercially Viable </b>
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 13B parameters
- Dataset: Open-instruct-v1 (oasst,dolly, hhrlhf)
## Use in Transformers
```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'VMware/open-llama-13b-open-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'
inputt = prompt_template.format(instruction= prompt)
input_ids = tokenizer(inputt, return_tensors="pt").input_ids.to("cuda")
output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])
print(output)
```
## Finetuning details
The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)
## Evaluation
<B>TODO</B>
|
allenai/open-instruct-flan-v2-7b
|
allenai
| 2023-06-20T17:50:44Z | 21 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2301.13688",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:05:04Z |
---
language:
- en
---
# Open-Instruct Flan V2 7B
This model is a 7B LLaMa model finetuned on the Flan V2 dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
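For example, a small helper that builds a prompt in this format (the helper itself is illustrative, not part of the released code):
```python
def format_prompt(user_message: str) -> str:
    # Note the newlines around the role tags and the trailing newline after <|assistant|>
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = format_prompt("What is the capital of Australia?")
print(repr(prompt))  # '<|user|>\nWhat is the capital of Australia?\n<|assistant|>\n'
```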
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 45.4 | 47.1 | 3.5 | 13.0 | 38.6 | 36.1 | 45.0 | 8.3 | 9.6 | 12.9 | 4.6 | 22.4 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
|
allenai/open-instruct-stanford-alpaca-13b
|
allenai
| 2023-06-20T17:50:22Z | 19 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:tatsu-lab/alpaca",
"arxiv:2306.04751",
"arxiv:2302.13971",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:19:57Z |
---
datasets:
- tatsu-lab/alpaca
language:
- en
---
# Open-Instruct Stanford Alpaca 13B
This model is a 13B LLaMa model finetuned on the Stanford Alpaca dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 45.1 | 47.1 | 6.0 | 8.0 | 35.0 | 34.5 | 32.8 | 7.8 | 15.7 | 27.6 | 28.7 | 26.4 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
|
allenai/open-instruct-unnatural-instructions-7b
|
allenai
| 2023-06-20T17:50:10Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:mrm8488/unnatural-instructions",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2212.09689",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:08:23Z |
---
datasets:
- mrm8488/unnatural-instructions
language:
- en
---
# Open-Instruct Unnatural Instructions 7B
This model is a 7B LLaMa model finetuned on the Unnatural Instructions dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 42.9 | 38.1 | 3.5 | 5.0 | 31.4 | 30.0 | 36.3 | 6.5 | 10.3 | 19.8 | 8.2 | 20.0 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{honovich2022unnatural,
title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
url = {https://arxiv.org/abs/2212.09689},
publisher = {arXiv},
year={2022}
}
```
|
allenai/open-instruct-unnatural-instructions-13b
|
allenai
| 2023-06-20T17:50:00Z | 21 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:mrm8488/unnatural-instructions",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2212.09689",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T23:41:42Z |
---
datasets:
- mrm8488/unnatural-instructions
language:
- en
---
# Open-Instruct Unnatural Instructions 13B
This model is a 13B LLaMa model finetuned on the Unnatural Instructions dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 46.2 | 45.7 | 5.0 | 7.5 | 37.6 | 32.8 | 39.3 | 9.1 | 13.9 | 24.8 | 10.9 | 23.6 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{honovich2022unnatural,
title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
url = {https://arxiv.org/abs/2212.09689},
publisher = {arXiv},
year={2022}
}
```
|
allenai/tulu-13b
|
allenai
| 2023-06-20T17:48:04Z | 27 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"dataset:sahil2801/CodeAlpaca-20k",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2301.13688",
"arxiv:2304.07327",
"arxiv:2304.03277",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T23:46:18Z |
---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- sahil2801/CodeAlpaca-20k
language:
- en
---
# Tulu 13B
This model is a 13B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 49.2 | 51.8 | 5.0 | 36.5 | 41.3 | 42.8 | 46.1 | 9.2 | 21.3 | 35.0 | 53.9 | 37.2 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
|
gsn-codes/poca-SoccerTwos
|
gsn-codes
| 2023-06-20T17:47:56Z | 35 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-20T17:42:24Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gsn-codes/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
allenai/tulu-7b
|
allenai
| 2023-06-20T17:47:54Z | 65 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"dataset:sahil2801/CodeAlpaca-20k",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2301.13688",
"arxiv:2304.07327",
"arxiv:2304.03277",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:06:11Z |
---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- sahil2801/CodeAlpaca-20k
language:
- en
---
# Tulu 7B
This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
|
allenai/open-instruct-human-mix-7b
|
allenai
| 2023-06-20T17:46:37Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2301.13688",
"arxiv:2304.07327",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:06:39Z |
---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
language:
- en
---
# Open-Instruct Human-mix 7B
This model is a 7B LLaMa model finetuned on a mixture of human-authored datasets (FLAN V2, CoT, Dolly, and Open Assistant 1). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 46.2 | 48.0 | 4.5 | 26.5 | 35.6 | 34.8 | 42.2 | 7.7 | 9.4 | 20.2 | 29.4 | 27.8 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
allenai/open-instruct-gpt4-alpaca-7b
|
allenai
| 2023-06-20T17:46:10Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.03277",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:06:16Z |
---
language:
- en
---
# Open-Instruct GPT-4 Alpaca 7B
This model is a 7B LLaMa model finetuned on the GPT-4 Alpaca dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 42.6 | 38.3 | 6.5 | 10.0 | 28.5 | 32.3 | 23.6 | 5.8 | 13.2 | 25.0 | 57.0 | 28.3 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
|
allenai/open-instruct-cot-7b
|
allenai
| 2023-06-20T17:45:42Z | 31 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2301.13688",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:05:59Z |
---
language:
- en
---
# Open-Instruct CoT 7B
This model is a 7B LLaMa model finetuned on the CoT dataset (a subset of Flan v2). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 41.3 | 42.5 | 6.5 | 27.5 | 33.7 | 31.3 | 44.4 | 8.5 | 7.4 | 17.3 | 4.2 | 22.4 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
|
allenai/open-instruct-baize-7b
|
allenai
| 2023-06-20T17:44:44Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.01196",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:06:06Z |
---
language:
- en
---
# Open-Instruct Baize 7B
This model is a 7B LLaMa model finetuned on the Baize dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 40.3 | 38.6 | 3.5 | 5.5 | 30.6 | 32.4 | 29.8 | 7.9 | 12.2 | 23.8 | 23.5 | 22.6 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
|
allenai/open-instruct-baize-13b
|
allenai
| 2023-06-20T17:44:30Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.01196",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:24:03Z |
---
language:
- en
---
# Open-Instruct Baize 13B
This model is a 13B LLaMa model finetuned on the Baize dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 43.5 | 41.5 | 4.5 | 8.5 | 35.3 | 36.7 | 33.9 | 9.0 | 14.5 | 27.3 | 28.7 | 26.0 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
|
allenai/open-instruct-oasst1-13b
|
allenai
| 2023-06-20T17:43:52Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:OpenAssistant/oasst1",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.07327",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:17:31Z |
---
datasets:
- OpenAssistant/oasst1
language:
- en
---
# Open-Instruct Open Assistant 13B
This model is a 13B LLaMa model finetuned on the Open Assistant dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 43.1 | 34.0 | 5.0 | 16.0 | 34.8 | 38.5 | 38.3 | 9.2 | 14.1 | 31.8 | 53.5 | 31.1 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
allenai/open-instruct-dolly-13b
|
allenai
| 2023-06-20T17:43:06Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"arxiv:2306.04751",
"arxiv:2302.13971",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:16:56Z |
---
datasets:
- databricks/databricks-dolly-15k
language:
- en
---
# Open-Instruct Dolly 13B
This model is a 13B LLaMa model finetuned on the Dolly dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 45.3 | 44.7 | 6.0 | 17.0 | 31.4 | 26.0 | 46.8 | 12.4 | 13.4 | 31.4 | 16.2 | 25.5 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
|
fabgraziano/bert-italian-xxl-cased_squad-it_v1
|
fabgraziano
| 2023-06-20T17:41:05Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"it",
"dataset:squad_it",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-20T17:16:58Z |
---
datasets:
- squad_it
language:
- it
metrics:
- f1
- exact_match
pipeline_tag: question-answering
tags:
- question-answering
---
# bert-italian-xxl-cased_squad-it_v1
BERT-based model for extractive Question-Answering.
## Model Details
### Model Description
- **Developed by:** fabgraziano
- **Model type:** Question-Answering
- **Language(s) (NLP):** Italian
- **Finetuned from model:** [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased)
- **Dataset used:** [squad_it](https://huggingface.co/datasets/squad_it)
## Training Details
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision
- **MAX_LENGTH:** 384
- **STRIDE:** 128
- **BATCH_SIZE:** 8
- **LEARNING_RATE:** 2e-5
- **EPOCHS:** 3
## Evaluation
#### Metrics
- **EM:** 63.87
- **F1:** 75.34
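A minimal usage sketch with the standard transformers question-answering pipeline (the exact inference setup used for the evaluation above may differ):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="fabgraziano/bert-italian-xxl-cased_squad-it_v1",
)

context = "Roma è la capitale d'Italia e si trova nella regione Lazio."
question = "Qual è la capitale d'Italia?"

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```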
|
AnnieEl/distilgpt2-finetuned-wikitext2
|
AnnieEl
| 2023-06-20T17:38:58Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T16:01:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6522
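For reference, this loss corresponds to a perplexity of roughly exp(3.6522) ≈ 38.6 on the evaluation split. A minimal generation sketch with standard transformers usage (nothing model-specific is assumed beyond the checkpoint name):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="AnnieEl/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=50)[0]["generated_text"])
```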
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7736 | 1.0 | 2110 | 3.6761 |
| 3.6673 | 2.0 | 4220 | 3.6560 |
| 3.6063 | 3.0 | 6330 | 3.6522 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
alexandrualexandru/final-3.0-t5-base-2023-06-20_13-18
|
alexandrualexandru
| 2023-06-20T17:31:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T13:21:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: final-3.0-t5-base-2023-06-20_13-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final-3.0-t5-base-2023-06-20_13-18
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0913
- Gen Len: 19.0
- Bertscorer-p: 0.5070
- Bertscorer-r: 0.0536
- Bertscorer-f1: 0.2705
- Sacrebleu-score: 4.7246
- Sacrebleu-precisions: [82.26851609027145, 72.56818398298178, 64.03031654034775, 58.101657218801705]
- Bleu-bp: 0.0688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:-----------------------------------------------------------------------------:|:-------:|
| 0.137 | 1.0 | 10382 | 0.1146 | 19.0 | 0.4908 | 0.0444 | 0.2581 | 3.8066 | [81.64981564768446, 70.67489114658926, 61.40689811921132, 54.44849682509699] | 0.0574 |
| 0.0937 | 2.0 | 20764 | 0.0913 | 19.0 | 0.5070 | 0.0536 | 0.2705 | 4.7246 | [82.26851609027145, 72.56818398298178, 64.03031654034775, 58.101657218801705] | 0.0688 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Dugoss/qwerty
|
Dugoss
| 2023-06-20T17:30:10Z | 2 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T16:23:31Z |
We built a model and trained it on the bulk of the digit data, so that 28×28-pixel images of digits can be fed to the model and it returns the value of the digit as output.

The model is built from ordinary fully connected (dense) layers with different numbers of units. The relu activation function is used on the input and intermediate layers, and a sigmoid is used as the activation on the output layer.

Adam was chosen as the optimizer.
The X_train array contains 60,000 images, and the y_train array contains exactly as many corresponding labels. The test data X_test and y_test contain 10,000 elements each.
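A minimal Keras sketch of the setup described above (the hidden-layer widths are illustrative assumptions, since the exact sizes are not listed; the relu hidden layers, sigmoid output, Adam optimizer and binary accuracy metric follow the description and the training log below):
```python
from tensorflow import keras

# Load MNIST and scale pixel values to [0, 1].
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
X_train = X_train.reshape(-1, 28 * 28).astype("float32") / 255.0
X_test = X_test.reshape(-1, 28 * 28).astype("float32") / 255.0

# One-hot targets so a 10-unit sigmoid output can be trained with a
# binary cross-entropy loss and binary accuracy, as in the log below.
y_train_oh = keras.utils.to_categorical(y_train, 10)
y_test_oh = keras.utils.to_categorical(y_test, 10)

model = keras.Sequential([
    keras.layers.Dense(256, activation="relu", input_shape=(28 * 28,)),  # assumed width
    keras.layers.Dense(128, activation="relu"),                          # assumed width
    keras.layers.Dense(10, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
model.fit(X_train, y_train_oh, epochs=5,
          batch_size=625,  # assumed; 60000 / 625 matches the 96 steps per epoch seen below
          validation_data=(X_test, y_test_oh))
```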
Epoch 1/5
96/96 [==============================] - 43s 429ms/step - loss: 0.1776 - binary_accuracy: 0.9385 - val_loss: 0.0580 - val_binary_accuracy: 0.9812
Epoch 2/5
96/96 [==============================] - 40s 417ms/step - loss: 0.0492 - binary_accuracy: 0.9838 - val_loss: 0.0376 - val_binary_accuracy: 0.9880
Epoch 3/5
96/96 [==============================] - 40s 419ms/step - loss: 0.0370 - binary_accuracy: 0.9881 - val_loss: 0.0347 - val_binary_accuracy: 0.9892
Epoch 4/5
96/96 [==============================] - 41s 423ms/step - loss: 0.0327 - binary_accuracy: 0.9893 - val_loss: 0.0327 - val_binary_accuracy: 0.9896
Epoch 5/5
96/96 [==============================] - 41s 427ms/step - loss: 0.0295 - binary_accuracy: 0.9905 - val_loss: 0.0312 - val_binary_accuracy: 0.9903
Training the model for 5 epochs yielded a very low loss and high accuracy!
|
danielpolok/h-chatbot-intent-classifier
|
danielpolok
| 2023-06-20T17:12:27Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-08T19:16:18Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# danielpolok/h-chatbot-intent-classifier
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("danielpolok/h-chatbot-intent-classifier")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Nika7664/Mnist_generator
|
Nika7664
| 2023-06-20T17:11:02Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"dataset:mnist",
"region:us"
] | null | 2023-06-20T16:59:58Z |
---
datasets:
- mnist
metrics:
- accuracy
library_name: keras
---
## Task:
Given the mnist dataset and a trained neural network, use them to generate an image that looks like a digit from the mnist set. The weights of the neural network provided with the task must not be changed during this additional training.

## Total number of trainable parameters: 54,160
## Algorithms
Optimization algorithm - Adam
Loss function - categorical cross-entropy (categorical_crossentropy)
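One way to approach a task like this (a hedged sketch, not necessarily the exact method behind the numbers below) is to keep the supplied classifier frozen and optimize only the generated image so that the classifier assigns it the target digit, using the same Adam optimizer and categorical cross-entropy loss:
```python
import tensorflow as tf
from tensorflow import keras

# Assumed: `classifier` is the trained network supplied with the task,
# saved under a hypothetical filename; its weights stay fixed.
classifier = keras.models.load_model("mnist_classifier.h5")
classifier.trainable = False

target_digit = 7
target = tf.one_hot([target_digit], 10)

# The "generated image" is simply a trainable tensor of pixels.
# The input shape is an assumption; match the classifier's expected input.
image = tf.Variable(tf.random.uniform((1, 28, 28, 1)))
optimizer = keras.optimizers.Adam(learning_rate=0.05)
loss_fn = keras.losses.CategoricalCrossentropy()

for step in range(500):
    with tf.GradientTape() as tape:
        preds = classifier(tf.clip_by_value(image, 0.0, 1.0), training=False)
        loss = loss_fn(target, preds)
    grads = tape.gradient(loss, [image])
    optimizer.apply_gradients(zip(grads, [image]))
```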
## Dataset size
Training - 60000
Test - 60000
Validation - 12000
## Results:
Loss: 2625.412109375
Accuracy: 0.7566536664962769
## Colab link:
https://colab.research.google.com/drive/18kyzm2mHhXyfZjEIy684sssv_SjV5zf2?usp=sharing
|
Andrey13rasfasf/task
|
Andrey13rasfasf
| 2023-06-20T17:08:20Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-20T16:25:39Z |
---
library_name: keras
---
Characteristics of the neural network:
Architecture: the autoencoder has two hidden layers, the first with 128 neurons and the second with 64 neurons. The output layer has 784 neurons, matching the size of the original MNIST image.
Activation functions: the autoencoder uses the "ReLU" activation for the hidden layers and "sigmoid" for the output layer.
Loss function: the network uses mean squared error (MSE) as its loss, which helps minimize the error when reconstructing the original image from the compressed representation.
Optimization algorithm: the network is trained with stochastic gradient descent with a small learning rate.
Data size and type: the network processes 28x28 MNIST images, which are black-and-white (single-channel).
Training schedule: 10 training epochs with a batch size of 128.
Number of neurons and network size: the network has 97,280 trainable parameters; the hidden layers contain 16,512 and 8,256 parameters respectively, and the output layer has 50,240 parameters.

|
TheBloke/baichuan-vicuna-7B-GGML
|
TheBloke
| 2023-06-20T17:06:45Z | 0 | 16 | null |
[
"arxiv:2306.04751",
"license:other",
"region:us"
] | null | 2023-06-20T13:01:31Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Fire Balloon's Baichuan Vicuna 7B GGML
These files are GGML format model files for [Fire Balloon's Baichuan Vicuna 7B](https://huggingface.co/fireballoon/baichuan-vicuna-7b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/baichuan-vicuna-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/baichuan-vicuna-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fireballoon/baichuan-vicuna-7b)
## Prompt template: Vicuna 1.1
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| baichuan-vicuna-7b.ggmlv3.q2_K.bin | q2_K | 2 | 3.02 GB | 5.52 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| baichuan-vicuna-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.76 GB | 6.26 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baichuan-vicuna-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.45 GB | 5.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baichuan-vicuna-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 3.11 GB | 5.61 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| baichuan-vicuna-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.94 GB | 6.44 GB | Original llama.cpp quant method, 4-bit. |
| baichuan-vicuna-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.38 GB | 6.88 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| baichuan-vicuna-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.26 GB | 6.76 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| baichuan-vicuna-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 4.01 GB | 6.51 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| baichuan-vicuna-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.81 GB | 7.31 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| baichuan-vicuna-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.25 GB | 7.75 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| baichuan-vicuna-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.98 GB | 7.48 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| baichuan-vicuna-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.85 GB | 7.35 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| baichuan-vicuna-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors |
| baichuan-vicuna-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.44 GB | 9.94 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m baichuan-vicuna-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
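If you prefer calling the model from Python rather than the `./main` binary, the same settings can be passed to llama-cpp-python, which is listed above as a compatible library. The following is a rough, untested sketch; it assumes a llama-cpp-python build recent enough (June 2023 or later) to read the k-quant files.
```python
from llama_cpp import Llama

llm = Llama(
    model_path="baichuan-vicuna-7b.ggmlv3.q5_0.bin",
    n_ctx=2048,        # context size, matching -c 2048 above
    n_gpu_layers=32,   # matching -ngl 32; set to 0 if you have no GPU acceleration
    n_threads=10,      # matching -t 10
)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a story about llamas ASSISTANT:"
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```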
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Fire Balloon's Baichuan Vicuna 7B
# baichuan-vicuna-7b
baichuan-vicuna-7b is a chat model supervised-finetuned on Vicuna ShareGPT data.
- The foundation model is [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a large-scale pre-trained model developed by Baichuan Intelligence that allows commercial use.
- The finetuning data includes [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), mixed with [COT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) and [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions) data, which aims to improve the model's reasoning and coding ability (the data mixing strategy is inspired by [TULU](https://arxiv.org/abs/2306.04751)).
- The training code: https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py, which is based on [FastChat](https://github.com/lm-sys/FastChat).
[中文说明](#chinese-model-card)
**[New]** [baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b), baichuan finetuned on both English and Chinese ShareGPT.
# Load model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-7b")
```
# Inference
Inference with FastChat:
```
python3 -m fastchat.serve.cli --model-path fireballoon/baichuan-vicuna-7b
```
Inference with Transformers:
```ipython
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
>>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-7b", use_fast=False)
>>> model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-7b").half().cuda()
>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
>>> prompt = instruction.format("five tips to help with sleep") # user message
>>> generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=2048, streamer=streamer)
'''
1. Create a relaxing bedtime routine, such as winding down with a warm bath or reading a book before bed.
2. Avoid caffeine and alcohol close to bedtime, as they can disrupt sleep.
3. Use blackout curtains or shades to block out light from the outside, which can interfere with your body's natural sleep rhythms.
4. Keep your bedroom cool and dark, and avoid using electronic devices at least an hour before bed.
5. Regular exercise and a healthy diet can also help improve sleep quality.
'''
```
# Test examples on FastChat Eval
Test example on [FastChat Eval](https://github.com/lm-sys/FastChat/blob/main/fastchat/eval/table/question.jsonl)
Please refer to [discussions 2](https://huggingface.co/fireballoon/baichuan-vicuna-7b/discussions/2) for comparative evaluation of Baichuan-vicuna-7b and Vicuna-13b by GPT-4.
```
Q: How can I improve my time management skills?
A: There are many different ways to improve your time management skills. Here are a few strategies that you may find helpful:
1 Prioritize your tasks: Make a list of all the tasks you need to complete and prioritize them based on importance and urgency. This will help you focus on the most important tasks and avoid getting bogged down by less critical items.
2 Use a time management planner: A planner can help you organize your time by breaking down your tasks into smaller, more manageable chunks. This can help you stay focused and on track.
3 Set clear boundaries: Setting clear boundaries between work time and personal time can help you avoid distractions and stay focused on your tasks. This may involve designating certain times of day or days of the week for work and other times for rest and leisure.
4 Take care of your physical and mental health: Taking care of your physical and mental health can help you be more productive and focused. This may involve getting enough sleep, eating a healthy diet, and taking breaks to relax and recharge.
Remember that improving your time management skills takes practice and patience. Don't be discouraged if you make mistakes or experience setbacks. Keep trying and you will eventually develop better time management habits that will help you be more productive and successful.
```
```
Q: Can you help me write a formal email to a potential business partner proposing a joint venture?
A: Sure! Here is a sample formal email that you could use as a template to propose a joint venture with a
potential business partner:
Subject: Proposal for Joint Venture
Dear [Business Partner],
I hope this email finds you well. I am writing to propose a joint venture between our companies. We believe that a collaboration between our companies could bring significant benefits to both of us, and we would like to explore the possibility of forming a partnership.
Our companies have complementary strengths, and we believe that a joint venture could allow us to leverage those strengths to create new products, services, and opportunities. We believe that a joint venture could also provide increased revenue and growth for both of our companies.
In order to explore the possibility of a joint venture, we would like to schedule a call to discuss the details of our proposal. During the call, we would like to discuss the terms of the joint venture, including the specific roles and responsibilities of each partner, the financial terms of the agreement, and any other relevant details.
I would be happy to schedule a call with you at your convenience. In the meantime, please feel free to contact me with any questions or concerns you may have about our proposal.
Sincerely,
[Your Name]
```
```
Q: Implement a binary search algorithm to find a specific element in a sorted array.
A: Here is an implementation of the binary search algorithm to find a specific element in a sorted array:
public static int binarySearch(int[] arr, int target) {
int left = 0;
int right = arr.length - 1;
while (left <= right) {
int mid = left + (right - left) / 2;
if (arr[mid] < target) {
left = mid + 1;
} else if (arr[mid] > target) {
right = mid - 1;
} else {
return mid;
}
}
return -1; // target not found
}
Here is how the binary search algorithm works:
1 Initialize two pointers, left and right, to the beginning and end of the array, respectively.
2 While the left pointer is less than or equal to the right pointer, perform the following steps:
1 Calculate the middle index, mid, by dividing the left pointer and the right pointer.
2 Check if the middle element, arr[mid], is less than or equal to the target value.
3 If the middle element is less than the target value, update the left pointer to mid + 1.
4 If the middle element is greater than the target value, update the right pointer to mid - 1.
5 If the middle element is equal to the target value, the target element has been found, so return the middle index.
3 If the target element is not found, return -1.
This algorithm has a runtime complexity of O(log n) and a space complexity of O(1), making it an efficient way to find a specific element in a sorted array.
```
---
# Chinese model card
baichuan-vicuna-7b is a chat model fully finetuned on Vicuna ShareGPT data.
- The foundation model is [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a commercially usable large-scale pre-trained model developed by Baichuan Intelligence.
- The finetuning data includes [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), mixed with a proportion of [COT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) and [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions) data to improve the model's reasoning and coding ability (the data mixing strategy is inspired by [TULU](https://arxiv.org/abs/2306.04751)).
- Training code: https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py, based on [FastChat](https://github.com/lm-sys/FastChat).
# Test examples on Ziya Eval
Test examples on [Ziya-Eval-Chinese](https://huggingface.co/datasets/IDEA-CCNL/Ziya-Eval-Chinese)
```
Q: 鸿门宴是什么意思
A: 鸿门宴是一个汉语成语,意思是指在公元前206年,项羽、刘邦两军在鸿门(今陕西临潼区东北)一带展开的一场“生死对决”。当时,项羽的叔父项伯在会见刘邦的手下时,将刘邦已经逃走的情报透露给项羽,使得刘邦免于被项羽所杀。项羽因此怀恨在心,准备第二天在鸿门设宴,邀请刘邦赴宴,然后袭击刘邦,然而,项羽手下的谋士范增早已看出刘邦的“龙气象” 为刘邦是“沛公刘邦是高祖(刘邦的谥号),他将统率四海,认为杀掉沛公是上策”。宴会期间,项羽的堂弟项庄在席间舞剑,企图将刘邦刺杀。最后,刘邦借故离开,前往韩信大军的营地。鸿门宴上,刘邦因项伯的缘故逃过一劫,但是,他也失项羽议和的机会,为以后楚汉之争埋下了隐患。
Q: 中国目前通用的火警电话是
A: 目前,中国通用的是119火警电话。
```
```
Q: 晚上睡不着应该怎么办
A:失眠是一种症状,而不是一种疾病。治疗失眠的目标是解决其潜在原因。失眠的一些常见原因包括:
*睡眠不足
*压力
*心理健康问题 (焦虑,抑郁等)
*药物 (处方和非处方)
*环境因素 (噪音,光线,温度等)
以下是一些通常可以帮助您睡眠的方法:
*睡觉前4-6个小时避免刺激物 (咖啡因,尼古丁,酒精)
*睡前冥想以减少压力和焦虑
*睡觉前不要看亮屏幕 (电脑,手机,平板电脑等)
*睡觉前至少一个小时不要运动
*睡觉前不要吃食物
*在睡觉前放松并平静下来 (看看书,喝杯牛奶等)
如果这些技术不起作用,或者如果失眠是由药物或其他医学状况引起的,您应该联系您的医生。
```
|
Maksimk04/Digits_autoencoder_mnist
|
Maksimk04
| 2023-06-20T17:04:16Z | 2 | 0 |
tf-keras
|
[
"tf-keras",
"dataset:mnist",
"region:us"
] | null | 2023-06-20T15:00:59Z |
---
datasets:
- mnist
---
This network is essentially a variational autoencoder (VAE): it takes a 28x28 image as input
and returns a modified image of the same digit.
Model structure:

The total number of parameters is 249,247 (124,233 for the encoder and 125,014 for the decoder).
The standard 'adam' optimizer from Keras was used as the optimization algorithm.
The loss function is MSE (mean squared error).
(Going forward, the loss function should ideally be replaced with one specific to VAEs.)
The training and test dataset sizes are the standard ones:
60k training
10k test
During training, the training set is additionally split into a validation set at a 1:5 ratio (0.2),
so the final training set size is 48k and the validation set is 12k.
After training (10 epochs):
loss on the training set: 0.334
loss on the validation set: 0.335
loss on the test set: 0.336
Choosing an accuracy-style metric for this kind of network is difficult;
the standard accuracy metric was used,
which accordingly gave rather uninformative results:
0.0092 on the training set
0.0093 on the validation set
0.0074 on the test set
Example of the network generating the digit 7

|
Yandexxxx/zachet_python
|
Yandexxxx
| 2023-06-20T17:04:05Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-20T16:13:33Z |
---
library_name: keras
---
A digit-recognition model that outputs the digit modulo 2, trained on the MNIST dataset.

The total number of trainable parameters, obtained with .summary, is 209,826.
.summary prints a summary of the machine learning model built in this project. It shows the number of layers, the number of neurons in each layer,
the activation functions and other model parameters. This helps determine what data goes into the model, what outputs are produced,
which parameters are used and which loss functions are used when training the model.

In this work I use the categorical_crossentropy loss function, which is used for multi-class classification.
As the optimizer I use adam, which is one of the most popular optimizers for training neural networks.
Since this work uses MNIST, which contains 70,000 handwritten digits, 10,000 of them form the test set and 60,000 the training set, of which 20% is held out for validation,
giving 10,000 test, 12,000 validation and 48,000 training samples.
Below are images showing loss and accuracy on all three datasets.
Accuracy for the validation and training sets

Loss for the validation and training sets

Accuracy and loss for the test set

|
sridhar1ga/telugu_dialect_classifier_on_vakyansh-wav2vec2-telugu-tem-100
|
sridhar1ga
| 2023-06-20T16:56:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-20T16:15:31Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: telugu_dialect_classifier_on_vakyansh-wav2vec2-telugu-tem-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# telugu_dialect_classifier_on_vakyansh-wav2vec2-telugu-tem-100
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-telugu-tem-100](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-telugu-tem-100) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7231
- Accuracy: 0.7125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.95 | 9 | 1.0745 | 0.4575 |
| 1.0888 | 2.0 | 19 | 1.0178 | 0.49 |
| 1.0449 | 2.95 | 28 | 0.9084 | 0.585 |
| 0.9557 | 4.0 | 38 | 0.8364 | 0.6417 |
| 0.888 | 4.95 | 47 | 0.8408 | 0.6417 |
| 0.8509 | 6.0 | 57 | 0.7608 | 0.6817 |
| 0.8185 | 6.95 | 66 | 0.7746 | 0.6817 |
| 0.8092 | 8.0 | 76 | 0.7231 | 0.715 |
| 0.7908 | 8.95 | 85 | 0.7266 | 0.7142 |
| 0.7728 | 9.47 | 90 | 0.7231 | 0.7125 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
pszemraj/long-t5-tglobal-xl-16384-booksci-summary-plos-10k
|
pszemraj
| 2023-06-20T16:49:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:pszemraj/scientific_lay_summarisation-plos-norm",
"license:bsd-3-clause",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T15:35:19Z |
---
license:
- bsd-3-clause
- apache-2.0
tags:
- generated_from_trainer
datasets:
- pszemraj/scientific_lay_summarisation-plos-norm
metrics:
- rouge
model-index:
- name: long-t5-tglobal-xl-16384-book-summary-scientific_lay_summarisation-plos-norm-16384-summ-v1
results:
- task:
name: Summarization
type: summarization
dataset:
name: pszemraj/scientific_lay_summarisation-plos-norm
type: pszemraj/scientific_lay_summarisation-plos-norm
split: validation
metrics:
- name: Rouge1
type: rouge
value: 44.3203
inference: False
---
# long-t5-tglobal-xl-16384-booksci-summary-plos-10k
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-xl-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) on the pszemraj/scientific_lay_summarisation-plos-norm dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5041
- Rouge1: 44.3203
- Rouge2: 11.0576
- Rougel: 22.7584
- Rougelsum: 40.1462
- Gen Len: 256.66
## Model description
Another test of further fine-tuning booksum-based models: this one is fine-tuned on the PLOS subset of the lay-summarisation data for about 10k input examples, making it roughly equivalent to [this checkpoint](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-booksci-summary-v1), which was fine-tuned on the eLife subset for two epochs (also around 10k examples).
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 165
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7715 | 0.28 | 350 | 1.5310 | 43.4729 | 10.4616 | 22.1928 | 39.505 | 260.87 |
| 1.9307 | 0.56 | 700 | 1.5102 | 44.1634 | 10.9336 | 22.3896 | 40.2939 | 253.58 |
| 1.2981 | 0.84 | 1050 | 1.5046 | 44.2728 | 10.8455 | 22.4122 | 40.3019 | 261.29 |
|
medmac01/moroccan-qa-v2
|
medmac01
| 2023-06-20T16:40:03Z | 8 | 0 |
transformers
|
[
"transformers",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:medmac01/qa_morocco_history_v1",
"autotrain_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-06-11T01:30:03Z |
---
datasets:
- medmac01/qa_morocco_history_v1
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
|
andersonbcdefg/nous-hermes-13b-ct2
|
andersonbcdefg
| 2023-06-20T16:35:02Z | 5 | 9 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-06-18T18:05:14Z |
8-bit version of Nous Research [Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b), quantized using [CTranslate2](https://github.com/OpenNMT/CTranslate2).
## How to Use
The great thing about `ctranslate2` is that it is basically self-contained (other than the tokenizer, for which you'll use a HuggingFace Transformers tokenizer). One quirk is that the translated model (depending on which inference/generation method you use) may expect tokens (strings) rather than token_ids (ints). To get started, use git or huggingface_hub to download this repo. You'll point `ctranslate2` at the folder for inference.
Example:
```python
import ctranslate2
from transformers import AutoTokenizer
# point it to folder that contains all the files in this repo. here we're calling it nous-hermes-ct2
model = ctranslate2.Generator("nous-hermes-ct2", device="cuda")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Hermes-13b", use_fast=False)
# get input ids, then turn them back into tokens
input_ids = tokenizer((
"### Instruction: What's the square root of 2?\n\n"
"### Response:")).input_ids
input_tokens = tokenizer.convert_ids_to_tokens(input_ids)
# generate completion, which is an iterator (you can stream tokens as they come out!)
it = model.generate_tokens(
input_tokens,
max_length=100
)
output = [token.token_id for token in it]
decoded = tokenizer.decode(output, skip_special_tokens=True)
print(decoded)
```
There are other methods for inference, including `generate_batch` (no streaming, supports batched inputs), `forward_batch` (only does 1 forward pass of the model), and `score_batch` (computes token-level likelihood & perplexity). See docs [here](https://opennmt.net/CTranslate2/generation.html).
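For example, a non-streaming, batched variant of the snippet above might look like the following sketch, reusing the same `model` and `tokenizer` objects:
```python
# Batched, non-streaming generation with the same tokenized prompt as above.
results = model.generate_batch([input_tokens], max_length=100, include_prompt_in_result=False)
output_ids = results[0].sequences_ids[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```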
|
YukiNon/khokhlov_alexander_practice
|
YukiNon
| 2023-06-20T16:21:20Z | 0 | 0 | null |
[
"autoencoder",
"fashion-mnist",
"pytorch",
"huggingface",
"region:us"
] | null | 2023-06-20T15:34:26Z |
---
tags:
- autoencoder
- fashion-mnist
- pytorch
- huggingface
---
# Autoencoder on the fashion_mnist dataset
## Task
Build an autoencoder for the Fashion MNIST dataset that takes an image as input and reconstructs the same image at the output.
## Architecture
The network consists of the following layers (a minimal code sketch follows the diagram below):
1. Input layer that receives the image
2. Fully connected layer with 256 neurons and ReLU activation
3. Fully connected layer with 128 neurons and ReLU activation
4. Hidden (bottleneck) layer with 50 neurons and ReLU activation
5. Fully connected layer with 128 neurons and ReLU activation
6. Fully connected layer with 256 neurons and ReLU activation
7. Output layer with sigmoid activation that decodes the image
8. Output layer that reshapes the output feature map back into an image

## Total number of trainable parameters
The autoencoder contains 235,978 trainable parameters
## Optimizer and loss function
The optimization algorithm is Adam; the loss function is binary cross-entropy.
## Dataset sizes
- Training set: 48,000 images
- Validation set: 16,000 images
- Test set: 16,000 images
## Training results on all three datasets
After training the model for 50 epochs on the training set, the results on the three datasets are:
- Train Loss: 0.26351
- Train Accuracy: 0.27152
- Val Loss: 0.26502
- Val Accuracy: 0.26352
- Test Loss: 0.26442
- Test Accuracy: 0.26600
|
gyrov/finalTask
|
gyrov
| 2023-06-20T16:19:16Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T14:15:07Z |
Final credit assignment
## Gurov B.I., group 46(1)
Task No. 2. Given the MNIST dataset, determine from an input image the remainder of that digit divided by 2.
**1. Model task description:**
From an input image, the model determines the remainder of the depicted digit divided by 2.
**2. Diagram of the network model:**

**3. Total number of trainable parameters:**
Total params: 7,861
**4. Optimization algorithm and loss function used:**
The optimizer is standard Adam; the loss function is binary cross-entropy, since the output is either 0 or 1.
**5. Training, validation and test dataset sizes:**
Training: 48,000
Validation: 12,000
Test: 10,000
**6. Model training results: loss and accuracy on all three datasets:**
* Train_loss: 0.054842568933963776
* Train_accuracy: 0.9802291393280029
* Val_loss: 0.08922852575778961
* Val_accuracy: 0.9724166393280029
* Test loss: 0.0881127342581749
* Test accuracy: 0.9726999998092651
|
SHENMU007/neunit_BASE_V9.3
|
SHENMU007
| 2023-06-20T16:07:45Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-14T16:02:53Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
venomdenom/MarkModel
|
venomdenom
| 2023-06-20T15:56:31Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"dataset:mnist",
"region:us"
] | null | 2023-06-20T14:34:00Z |
---
datasets:
- mnist
metrics:
- accuracy
library_name: keras
---
## Task:
Given the MNIST dataset, identify the digit from an input image.

## Total number of trainable parameters: 269,322
## Algorithms used:
adam_optimizer: the optimization algorithm
sparse_categorical_crossentropy: categorical cross-entropy, the loss function
## Dataset sizes:
training: 10000
test: 10000
## Results
training:
Training loss: 0.14755813777446747
Training accuracy: 0.9786666631698608
test:
Validation loss: 0.1685849279165268
Validation accuracy: 0.9717000126838684
## Colab link:
https://colab.research.google.com/drive/1TnfNRwHOqq5NjewGWZ3v1B7iEiS-iuFG?usp=sharing
|
norBARA/IA-LLAMA
|
norBARA
| 2023-06-20T15:42:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T15:41:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
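The card lists only the quantization config used during training; the following is a minimal, assumed sketch of loading this adapter on top of its base model in 8-bit, matching the config above. The base model id is not stated in this card, so the one below is a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "huggyllama/llama-7b"  # placeholder: the actual base model is not given in this card
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(base_model_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "norBARA/IA-LLAMA")
```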
|
catrabbitbear/taxi-v3-attempt1
|
catrabbitbear
| 2023-06-20T15:37:05Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T15:37:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-attempt1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="catrabbitbear/taxi-v3-attempt1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
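The snippet above assumes a `load_from_hub` helper (as used in the Hugging Face Deep RL course) and a `gym` import; a minimal, assumed implementation for completeness:
```python
import pickle
import gymnasium as gym  # the usage snippet above assumes this import (assumption)
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dictionary from the Hub and load it.
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```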
|
feabries/sd-class-butterflies-32
|
feabries
| 2023-06-20T15:35:37Z | 38 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-06-20T15:35:10Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('feabries/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
paumena/BioASQ-BERT
|
paumena
| 2023-06-20T15:29:07Z | 72 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-20T15:08:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: paumena/BioASQ-BERT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# paumena/BioASQ-BERT
This model is a fine-tuned version of [paumena/QA-BERT](https://huggingface.co/paumena/QA-BERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4065
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
{'exact_match': 82.21544715447155, 'f1': 85.49528966471387}
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1225, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.0949 | 0 |
| 0.6130 | 1 |
| 0.4899 | 2 |
| 0.4401 | 3 |
| 0.4065 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
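Since the card omits a usage snippet, here is a minimal, assumed example of extractive question answering with this checkpoint. The question/context pair is illustrative, and `framework="tf"` is passed because only TensorFlow weights are listed for this repo.
```python
from transformers import pipeline

# Assumed usage sketch: extractive QA with the fine-tuned BioASQ checkpoint.
qa = pipeline("question-answering", model="paumena/BioASQ-BERT", framework="tf")
result = qa(
    question="Which protein is targeted by the drug imatinib?",
    context="Imatinib is a tyrosine-kinase inhibitor that targets the BCR-ABL fusion protein.",
)
print(result["answer"], result["score"])
```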
|
LarryAIDraw/tingyun_hsr
|
LarryAIDraw
| 2023-06-20T15:28:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:19:36Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/10234/tingyun-or-honkai-star-rail
|
catrabbitbear/q-FrozenLake-v1-4x4-noSlippery
|
catrabbitbear
| 2023-06-20T15:26:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T15:26:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="catrabbitbear/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/DakiEyesV4-15
|
LarryAIDraw
| 2023-06-20T15:22:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:16:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/93955/daki-eyes-concept-lora-inpaint
|
LarryAIDraw/ST-Louisv1
|
LarryAIDraw
| 2023-06-20T15:22:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:15:58Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/91079/st-louis-4-outfits-azurlane-4
|
Lajonbot/LaMini-GPT-774M-19000-steps-polish
|
Lajonbot
| 2023-06-20T15:14:20Z | 248 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"pl",
"dataset:databricks/databricks-dolly-15k",
"dataset:s3nh/alpaca-dolly-instruction-only-polish",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-30T07:32:54Z |
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---
### Introduction
This repository contains LaMini-GPT-774M finetuned to Polish on a translated alpaca-dolly dataset.
Its main task is to give accurate answers to the instructions it is asked.
Below you can find instructions on how to run inference with this model.
**Update: Tokenizer added**
### Evaluation part
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME: str = 'Lajonbot/LaMini-GPT-774M-19000-steps-polish'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).cuda()

# Resize the embedding matrix to match the tokenizer vocabulary size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)


def _generate_prompt(instruction, input=None):
    if input:
        return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy konktekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""


def evaluate(instruction, input=None, max_new_tokens=256):
    # Minimal generation helper (not defined in the original snippet; assumed implementation):
    # build the prompt, generate a completion and print the decoded answer.
    prompt = _generate_prompt(instruction, input)
    input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))


manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i – ze względu na posiadane parametry – najlepsze telefony komórkowe to smartfony dotykowe."
print(f"Evaluation for {manual_instruction} \n\n\n {manual_input}\n\n")
evaluate(instruction=manual_instruction,
         input=manual_input)
```
|
theSOL1/kogrammar-distil
|
theSOL1
| 2023-06-20T15:10:35Z | 147 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"grammar",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-01T15:55:58Z |
---
language: ko
license: mit
tags:
- bart
- grammar
---
# kogrammar-distil
Dataset: National Institute of Korean Language spelling-correction corpus (국립국어원 맞춤법 교정 말뭉치)
<br>
<br>
**Backbone Model**:
- [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2/blob/main/README.md)
**Contributors**:
- ChangGeon Kang: [NeverAsking](https://github.com/NeverAsking/)
- Subin Park: [SOL1archive](https://github.com/SOL1archive/)
**GitHub Repo**:
- [SOL1archive/KoGrammar](https://github.com/SOL1archive/KoGrammar)
## Train Method
The model was trained on roughly 67.5% of the full dataset.
<br>
Using SFT distillation, the decoder layers of the [kogrammar-base](https://huggingface.co/theSOL1/kogrammar-base) model were reduced from 6 to 3 and the model was retrained.
## Metric
|BLEU-2|ROUGE-2 F1|
|-|-|
|77.8 %|55.0 %|
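The card does not include a usage snippet; below is a minimal, assumed example that treats the checkpoint as a standard BART seq2seq correction model. Whether any task prefix is required is not documented here, so the plain-sentence input is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "theSOL1/kogrammar-distil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "오늘 날씨가 넘 조아서 산책을 갔다"  # illustrative sentence with spelling errors
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```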
|
theSOL1/kogrammar-tiny-distil
|
theSOL1
| 2023-06-20T15:10:13Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"grammar",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-01T16:09:38Z |
---
language: ko
license: mit
tags:
- bart
- grammar
---
# kogrammar-tiny-distil
Dataset: National Institute of Korean Language spelling-correction corpus (국립국어원 맞춤법 교정 말뭉치)
<br>
<br>
**Backbone Model**:
- [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2/blob/main/README.md)
- [kogrammar-base](https://huggingface.co/theSOL1/kogrammar-base)
**Contributors**:
- ChangGeon Kang: [NeverAsking](https://github.com/NeverAsking/)
- Subin Park: [SOL1archive](https://github.com/SOL1archive/)
**GitHub Repo**:
- [SOL1archive/KoGrammar](https://github.com/SOL1archive/KoGrammar)
## Train Method
The model was trained on roughly 67.5% of the full dataset.
<br>
Using SFT distillation, the decoder layers of the [kogrammar-base](https://huggingface.co/theSOL1/kogrammar-base) model were reduced from 6 to 1 and the model was retrained.
## Metric
|BLEU-2|ROUGE-2 F1|
|-|-|
|77.8 %|55.0 %|
|
SotirisLegkas/Socratic-GODEL-instruct
|
SotirisLegkas
| 2023-06-20T14:54:20Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T13:54:02Z |
---
pipeline_tag: text2text-generation
---
Instruction: given a context, reply as in a Socratic dialogue.
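No usage snippet is provided; the following is a minimal, assumed example that treats the model as a standard seq2seq checkpoint and follows the instruction sentence above. The exact prompt layout (for example, how the dialogue context is appended after the instruction) is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SotirisLegkas/Socratic-GODEL-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed prompt layout: instruction followed by the dialogue context.
instruction = "Instruction: given a context, reply as in a Socratic dialogue."
context = "I think I already know everything I need about this topic."
inputs = tokenizer(f"{instruction} {context}", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```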
|
KUN1995/ZJP
|
KUN1995
| 2023-06-20T14:32:28Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-20T14:07:52Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ponponnsan/sakura-rinna
|
ponponnsan
| 2023-06-20T14:26:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-20T13:52:32Z |
A model obtained by training rinna on the sakura-japanese-dataset.
|
leukas/mt5-large-nc16-250k-deen
|
leukas
| 2023-06-20T14:12:41Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"translation",
"de",
"en",
"arxiv:2302.14220",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-05-22T17:08:41Z |
---
language:
- de
- en
pipeline_tag: translation
---
# mt5-large-nc16-250k-deen
This model is released as part of the work from [Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation](https://arxiv.org/abs/2302.14220).
It is an mT5 model finetuned on German-->English translation using 250k sentence pairs from the WMT NewsCommentary v16 dataset.
To use the model correctly, you must prepend the prompt with "translate X to Y: ", where X and Y are your source and target languages (e.g. German, English).
NOTE: The decoder_start_token_id is 259 for byt5 models and 250099 for mt5 models, which is different from the default token from google's byt5 and mt5 models (which is 0).
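Putting the two notes above together, a minimal usage sketch (the German sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "leukas/mt5-large-nc16-250k-deen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prepend the task prompt and set the mt5 decoder start token mentioned above.
text = "translate German to English: Der Vertrag wurde gestern unterzeichnet."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, decoder_start_token_id=250099, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```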
|
WALIDALI/bekiksrilym
|
WALIDALI
| 2023-06-20T14:02:49Z | 29 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-20T13:50:28Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekiksrilym Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
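No diffusers snippet is given; a minimal, assumed example is below. The prompt token `bekiksrilym` is a guess based on the concept name, and fp16 on GPU is an assumption.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "WALIDALI/bekiksrilym", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of bekiksrilym person").images[0]
image.save("bekiksrilym.png")
```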
Sample pictures of this concept:
|
kchen621/dqn-SpaceInvadersNoFrameskip-v4
|
kchen621
| 2023-06-20T13:54:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T13:53:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 598.00 +/- 294.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kchen621 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kchen621 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kchen621
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Graphcore/whisper-small-ipu
|
Graphcore
| 2023-06-20T13:51:42Z | 0 | 0 | null |
[
"optimum_graphcore",
"arxiv:2212.04356",
"license:apache-2.0",
"region:us"
] | null | 2023-06-20T13:37:54Z |
---
license: apache-2.0
---
# Graphcore/whisper-small-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the Whisper Small model (e.g. [openai/whisper-small](https://huggingface.co/openai/whisper-small)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/whisper-small-ipu")
```
|
hassansoliman/falcon-7b-qlora-utterance-adaptations_v5
|
hassansoliman
| 2023-06-20T13:45:06Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T13:44:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
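A minimal, assumed sketch of loading this adapter with the same 4-bit NF4 settings listed above. The Falcon-7B base model id is an assumption inferred from the adapter's name.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumption: base model inferred from the adapter name
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = PeftModel.from_pretrained(base, "hassansoliman/falcon-7b-qlora-utterance-adaptations_v5")
```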
|
TheBloke/minotaur-15B-GGML
|
TheBloke
| 2023-06-20T13:19:02Z | 0 | 10 |
transformers
|
[
"transformers",
"code",
"text-generation",
"dataset:bigcode/the-stack-dedup",
"dataset:tiiuae/falcon-refinedweb",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"dataset:hellaswag",
"dataset:openai/summarize_from_feedback",
"dataset:riddle_sense",
"dataset:gsm8k",
"dataset:camel-ai/math",
"dataset:camel-ai/biology",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:winglian/evals",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"region:us"
] |
text-generation
| 2023-06-18T11:47:47Z |
---
inference: false
pipeline_tag: text-generation
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
- text: 'Gradient descent is'
example_title: Machine Learning
group: English
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- tiiuae/falcon-refinedweb
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
metrics:
- code_eval
- mmlu
- arc
- hellaswag
- truthfulqa
library_name: transformers
tags:
- code
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OpenAccess AI Collective's Minotaur 15B GGML
These files are GGML format model files for [OpenAccess AI Collective's Minotaur 15B](https://huggingface.co/openaccess-ai-collective/minotaur-15b).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/minotaur-15B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/minotaur-15B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-15b)
## A note regarding context length: 8K
It is confirmed that the 8K context of this model works in [KoboldCpp](https://github.com/LostRuins/koboldcpp), if you manually set max context to 8K by adjusting the text box above the slider:

(set it to 8192 at most)
It is currently unknown whether it is compatible with other clients.
If you have feedback on this, please let me know.
## Prompt template
```
USER: <prompt>
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with text-generation-webui, llama.cpp, or llama-cpp-python.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The LoLLMS Web UI which uses ctransformers: [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
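As a concrete example, here is a minimal sketch of loading one of the provided files with the ctransformers Python library. The `model_file` is taken from the "Provided files" table below; the `model_type` value and the generation parameters are assumptions to adapt to your setup, not a verified configuration.
```python
# pip install ctransformers
from ctransformers import AutoModelForCausalLM

# A sketch, not a verified configuration: model_file is one of the files listed
# in "Provided files" below; model_type="starcoder" is assumed because
# Minotaur 15B is built on StarCoder Plus.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/minotaur-15B-GGML",
    model_file="minotaur-15b.ggmlv3.q4_0.bin",
    model_type="starcoder",
)

# Use the prompt template shown above
prompt = "USER: Write a haiku about llamas\nASSISTANT:"
print(llm(prompt, max_new_tokens=128))
```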
## Tutorial for using LoLLMS Web UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by LoLLMS Web UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| minotaur-15b.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| minotaur-15b.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| minotaur-15b.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| minotaur-15b.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| minotaur-15b.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OpenAccess AI Collective's Minotaur 15B
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Minotaur 15B 8K
Minotaur 15B is an instruct fine-tuned model built on top of StarCoder Plus. Minotaur 15B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.
Minotaur 15B has a context length of 8K tokens, allowing for strong recall at long contexts.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)
# Prompts
Chat-only style prompts using `USER:` and `ASSISTANT:`, as shown below.
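For example, a fully formatted prompt would look like this (the question is illustrative, not taken from the training data):
```
USER: What is the capital of France?
ASSISTANT:
```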
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
# Training Datasets
The Minotaur 15B model is fine-tuned on the following openly available datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows of instruct augmented data for detailed explanations, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation
# Shoutouts
Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models so they are more accessible to all.
# Demo
HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
## Release Notes
- https://wandb.ai/wing-lian/minotaur-16b-8k/runs/tshgbl2k
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 4XA100 80GB
- 1 epoch, taking approximately 30 hours
- Trained using QLoRA techniques
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model StarCoder; please refer to its model card's Limitations section for relevant information (included below).
## Benchmarks
TBD
## Examples
TBD
# StarCoderPlus
Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** English & 80+ Programming languages
## Use
### Intended use
The model was trained on English and GitHub code. As such it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderplus"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or compliance with other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
# Training
StarCoderPlus is StarCoderBase fine-tuned on 600B English and code tokens; the base model was pre-trained on 1T code tokens. Below are the fine-tuning details:
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 14 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
|
gaioNL/a2c-PandaReachDense-v2
|
gaioNL
| 2023-06-20T13:06:12Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T13:03:22Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.62 +/- 0.35
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub (filename assumed) and load it
checkpoint = load_from_hub("gaioNL/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
jvvelzen/ppo-Huggy
|
jvvelzen
| 2023-06-20T12:53:55Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-20T12:53:50Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jvvelzen/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|