modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
NotLiame/DiscordBot
|
NotLiame
| 2023-10-30T22:11:07Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T22:02:27Z |
---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on the dialogue of Joshua, a character from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM  # AutoModelWithLMHead is deprecated

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelForCausalLM.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
royallab/Echidna-13b-v0.3-exl2
|
royallab
| 2023-10-30T22:09:17Z | 0 | 0 | null |
[
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-10-30T05:02:53Z |
---
license: cc-by-nc-4.0
language:
- en
---
## Information
This is an Exl2 quantized version of [Echidna-13b-v0.3](https://huggingface.co/NeverSleep/Echidna-13b-v0.3).
Please refer to the original creator for more information.
Calibration dataset: [wikitext](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-v1/test)
## Branches:
- main: Measurement files
- 4bpw: 4 bits per weight
- 5bpw: 5 bits per weight
- 6bpw: 6 bits per weight
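A specific branch can be fetched on its own with the `huggingface-hub` CLI — a minimal sketch (pick any branch name from the list above as the `--revision`):
```shell
pip3 install huggingface-hub
huggingface-cli download royallab/Echidna-13b-v0.3-exl2 --revision 6bpw --local-dir Echidna-13b-v0.3-exl2 --local-dir-use-symlinks False
```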
## Notes
- 6bpw is recommended for the best quality-to-VRAM-usage ratio (assuming you have enough VRAM).
- Please ask for more bpws in the community tab if necessary.
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri
You should not feel obligated to donate, but if you do, I'd appreciate it.
---
|
royallab/Lewd-Sydney-20B-exl2
|
royallab
| 2023-10-30T22:05:12Z | 0 | 0 | null |
[
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-10-30T05:07:29Z |
---
license: cc-by-nc-4.0
language:
- en
---
## Information
This is an Exl2 quantized version of [Lewd-Sydney-20B](https://huggingface.co/Undi95/Lewd-Sydney-20B).
Please refer to the original creator for more information.
Calibration dataset: [wikitext](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-v1/test)
## Branches:
- main: Measurement files
- 4bpw: 4 bits per weight
- 5bpw: 5 bits per weight
- 6bpw: 6 bits per weight
## Notes
- 6bpw is recommended for the best quality-to-VRAM-usage ratio (assuming you have enough VRAM).
- Please ask for more bpws in the community tab if necessary.
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri
You should not feel obligated to donate, but if you do, I'd appreciate it.
---
|
Yntec/Deliberate
|
Yntec
| 2023-10-30T22:00:04Z | 721 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"General",
"Anime",
"Art",
"XpucT",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-20T20:51:16Z |
---
license: cc-by-nc-nd-4.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Anime
- Art
- XpucT
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Deliberate
This is Deliberate 1.0 with the MoistMixV2 VAE baked in, giving improved detail compared to Deliberate 1.1.
Comparison:

(Click for larger)

Sample and prompt:
Cartoon Pretty CUTE Girl, sitting on Overwatch, DETAILED CHIBI EYES, soaking in the rain, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus, aerial photography, trending on artstation, peter lloyd. Illustration By ROSSDRAWS and Dave Rapoza and artgerm and leyendecker and Clay
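To try this prompt locally, here is a minimal `diffusers` sketch (an assumption, not from the original page; it presumes a CUDA GPU with fp16 support, and the output filename is arbitrary):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Deliberate", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # drop this line and use torch.float32 for CPU-only inference

prompt = "Cartoon Pretty CUTE Girl, sitting on Overwatch, DETAILED CHIBI EYES, soaking in the rain"
image = pipe(prompt).images[0]
image.save("deliberate_sample.png")
```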
Original page:
https://huggingface.co/XpucT/Deliberate
|
ericrong888/logo_classifier
|
ericrong888
| 2023-10-30T21:53:50Z | 80 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-13T03:43:39Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: ericrong888/logo_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ericrong888/logo_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7196
- Validation Loss: 0.8069
- Train Accuracy: 1.0
- Epoch: 4
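The usage sections below were left empty; as a hedged starting point (the repo ships TensorFlow weights, so TensorFlow must be installed, and `logo.png` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ericrong888/logo_classifier")
print(classifier("logo.png"))  # placeholder image path; prints a list of label/score dicts
```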
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 75, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1054 | 1.0410 | 0.8333 | 0 |
| 0.9869 | 0.9692 | 0.8333 | 1 |
| 0.8856 | 0.9035 | 1.0 | 2 |
| 0.8117 | 0.8585 | 1.0 | 3 |
| 0.7196 | 0.8069 | 1.0 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
puchki2015/maitra1982-finetuned-bert-mrpc
|
puchki2015
| 2023-10-30T21:46:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T21:43:25Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: maitra1982-finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8926746166950595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# maitra1982-finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4454
- Accuracy: 0.8456
- F1: 0.8927
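For a quick smoke test — a hedged sketch with a made-up sentence pair, assuming the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="puchki2015/maitra1982-finetuned-bert-mrpc")
pair = {"text": "The company reported strong earnings.", "text_pair": "Earnings at the firm were strong."}
print(clf(pair))  # MRPC-style paraphrase judgement with a confidence score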
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5477 | 1.0 | 230 | 0.4645 | 0.8039 | 0.8701 |
| 0.3141 | 2.0 | 460 | 0.4186 | 0.8382 | 0.89 |
| 0.1597 | 3.0 | 690 | 0.4454 | 0.8456 | 0.8927 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
iago22/mcpose
|
iago22
| 2023-10-30T21:39:58Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-10-30T21:39:58Z |
---
license: other
license_name: mcpose
license_link: LICENSE
---
|
TheBloke/Uncensored-Jordan-13B-GGUF
|
TheBloke
| 2023-10-30T21:25:22Z | 247 | 5 |
transformers
|
[
"transformers",
"gguf",
"llama",
"en",
"base_model:ajibawa-2023/Uncensored-Jordan-13B",
"base_model:quantized:ajibawa-2023/Uncensored-Jordan-13B",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-10-30T20:18:32Z |
---
base_model: ajibawa-2023/Uncensored-Jordan-13B
inference: false
language:
- en
license: cc-by-nc-nd-4.0
model_creator: Feynman Innovations
model_name: Uncensored Jordan 13B
model_type: llama
prompt_template: 'This is a conversation with your Uncensored AI assistant Jordan.
Jordan can discuss with you at length about various topics such as ---, ---, ---,
. You can ask it anything you want and it will do its best to give you accurate
and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Uncensored Jordan 13B - GGUF
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Uncensored Jordan 13B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Feynman Innovations's Uncensored Jordan 13B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Jordan
```
This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-nd-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Feynman Innovations's Uncensored Jordan 13B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
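As a sanity check, the Q4_K figure follows directly from the stated layout (assuming one fp16 scale and one fp16 min per super-block, as in llama.cpp): a super-block holds 8 blocks × 32 weights = 256 weights; the 4-bit quants take 256 × 4 = 1024 bits, the 6-bit block scales and mins take 8 × 12 = 96 bits, and the fp16 super-block scale and min add 32 bits, giving (1024 + 96 + 32) / 256 = 4.5 bpw.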
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [uncensored-jordan-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [uncensored-jordan-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [uncensored-jordan-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [uncensored-jordan-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [uncensored-jordan-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [uncensored-jordan-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [uncensored-jordan-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [uncensored-jordan-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [uncensored-jordan-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [uncensored-jordan-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [uncensored-jordan-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [uncensored-jordan-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Uncensored-Jordan-13B-GGUF and below it, a specific filename to download, such as: uncensored-jordan-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Uncensored-Jordan-13B-GGUF uncensored-jordan-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Uncensored-Jordan-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Uncensored-Jordan-13B-GGUF uncensored-jordan-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m uncensored-jordan-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.\n\nContext\nYou are a Uncensored AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Uncensored-Jordan-13B-GGUF", model_file="uncensored-jordan-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
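As a hedged starting point, a minimal sketch using langchain's `LlamaCpp` wrapper as it existed around the time of this card (the model file must already be downloaded locally):
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="uncensored-jordan-13b.Q4_K_M.gguf",  # local path to a downloaded quant
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,
)
print(llm("AI is going to"))
```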
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Feynman Innovations's Uncensored Jordan 13B
**Jordan: An Uncensored Model**
The name "Jordan" pays homage to the charismatic character Jordan Belfort, portrayed by Leonardo DiCaprio in the movie "The Wolf of Wall Street."
Just like the character, Jordan aspires to push boundaries and encourage unfiltered conversations.
Jordan Belfort, the central character in "The Wolf of Wall Street," is known for his audaciousness, charisma, and willingness to speak about anything, no matter the societal norms or boundaries.
Jordan, the AI language model, draws inspiration from these qualities to offer a space where users can discuss a wide range of topics without fear of censorship or constraints.
Users can engage in candid discussions about politics, religion, contentious issues, and other subjects that might be off-limits on mainstream social media platforms.
It is trained on around 155,000 sets of conversations, each set having 10–15 conversations. We will not release this data.
**Warning**
An uncensored model has few or no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object.
Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 77 hours. The DeepSpeed codebase was used for training. This model is based on Meta's Llama-2.
**GPTQ GGML & AWQ**
GPTQ: TBA
GGUF: TBA
AWQ: TBA
**Example Prompt:**
```
This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: <prompt>
ASSISTANT:
```
<!-- original-model-card end -->
|
TheBloke/Nethena-20B-GPTQ
|
TheBloke
| 2023-10-30T21:23:33Z | 52 | 7 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:NeverSleep/Nethena-20B",
"base_model:quantized:NeverSleep/Nethena-20B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-10-30T20:09:21Z |
---
base_model: NeverSleep/Nethena-20B
inference: false
license: cc-by-nc-4.0
model_creator: NeverSleep
model_name: Nethena 20B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nethena 20B - GPTQ
- Model creator: [NeverSleep](https://huggingface.co/NeverSleep)
- Original model: [Nethena 20B](https://huggingface.co/NeverSleep/Nethena-20B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [NeverSleep's Nethena 20B](https://huggingface.co/NeverSleep/Nethena-20B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nethena-20B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nethena-20B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nethena-20B-GGUF)
* [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Nethena-20B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's Nethena 20B](https://huggingface.co/NeverSleep/Nethena-20B).
<!-- licensing end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Nethena-20B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 10.52 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Nethena-20B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 10.89 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Nethena-20B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 12.04 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Nethena-20B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.41 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Nethena-20B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 20.35 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Nethena-20B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 9.51 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Nethena-20B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 20.80 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Nethena-20B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Nethena-20B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Nethena-20B-GPTQ`:
```shell
mkdir Nethena-20B-GPTQ
huggingface-cli download TheBloke/Nethena-20B-GPTQ --local-dir Nethena-20B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Nethena-20B-GPTQ
huggingface-cli download TheBloke/Nethena-20B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Nethena-20B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Nethena-20B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nethena-20B-GPTQ --local-dir Nethena-20B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Nethena-20B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nethena-20B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Nethena-20B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nethena-20B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Nethena-20B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,  # send the full templated prompt, not the bare instruction
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Nethena-20B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NeverSleep's Nethena 20B

# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Nethena-20B model. Use Alpaca format. Suitable for RP, ERP and general stuff.
What would happen if we combined all of our best models? Well... here it is, the holy grail: **Echidna v0.3** + **Athena v3** + **Nete**
This model also has a 13b version, you can check it out right [here](https://huggingface.co/NeverSleep/Nethena-13B).
[Recommended settings - No settings yet (please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Nethena-20B.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Nethena-20B)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!--[exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-20B-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Nethena-20B-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we don't screenshot random reviews without asking first!
No ratings yet!
If you want your rating to be here, send us a message over on Discord and we'll put up a screenshot of it here. Discord names are "ikaridev" and "undi".
<!-- description end -->
<!-- description start -->
## Models+loras used and recipe
- NeverSleep/Echidna-13b-v0.3
- IkariDev/Athena-v3
- Undi95/Nete-13B
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Others
Undi: If you want to support me, you can do so [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
li-ping/river_retriver_416data_v3
|
li-ping
| 2023-10-30T21:17:15Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-30T21:17:07Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# li-ping/river_retriver_416data_v3
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('li-ping/river_retriver_416data_v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=li-ping/river_retriver_416data_v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 791 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 400,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 80,
"weight_decay": 0.01
}
```
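Those settings map onto a short `sentence-transformers` training script; this is a hedged reconstruction (the training pairs are placeholders, and the starting checkpoint is an assumption — the card only reveals an XLM-RoBERTa backbone):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumption: the original base checkpoint is not stated; any XLM-R based
# sentence-transformers model matches the architecture shown below.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

train_examples = [InputExample(texts=["a query", "a matching passage"])]  # placeholder pairs
loader = DataLoader(train_examples, shuffle=True, batch_size=4)
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(loader, loss)],
    epochs=1,
    warmup_steps=80,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
)
```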
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
kunhanw/mms_gn_fine_tune
|
kunhanw
| 2023-10-30T21:13:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-30T20:51:31Z |
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: mms_gn_fine_tune
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: gn
split: test
args: gn
metrics:
- name: Wer
type: wer
value: 0.329064919594997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms_gn_fine_tune
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1811
- Wer: 0.3291
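A minimal transcription sketch, assuming the checkpoint loads directly through the ASR pipeline (the audio path is a placeholder for a 16 kHz recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kunhanw/mms_gn_fine_tune")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```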
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.0552 | 1.79 | 100 | 0.2300 | 0.3880 |
| 0.2259 | 3.57 | 200 | 0.1811 | 0.3291 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Kooten/Nethena-20B-3bpw-h8-exl2
|
Kooten
| 2023-10-30T21:11:47Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T17:21:48Z |
---
license: cc-by-nc-4.0
---
## Description
Exllama 2 quant of [NeverSleep/Nethena-20B](https://huggingface.co/NeverSleep/Nethena-20B)
3 BPW, Head bit set to 8
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
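A small helper for filling the template above (a sketch, not part of the original release):
```python
def build_alpaca_prompt(instruction: str) -> str:
    # Fills the Alpaca template shown above
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(build_alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```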
## VRAM
My VRAM usage with 20B models is:
| Bits per weight | Context | VRAM |
|--|--|--|
| 6bpw | 4k | 24gb |
| 4bpw | 4k | 18gb |
| 4bpw | 8k | 24gb |
| 3bpw | 4k | 16gb |
| 3bpw | 8k | 21gb |
I have rounded up; these aren't exact numbers, and this is also on a Windows machine.
|
imi2/openbuddy-falcon-180b-v13-preview1-GGUF
|
imi2
| 2023-10-30T20:40:18Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-10-30T09:15:50Z |
Tested the quantized models; both load correctly as of commit 41aee4d in llama.cpp.
|
waldie/Nethena-20B-4bpw-h6-exl2
|
waldie
| 2023-10-30T20:37:06Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T20:07:32Z |
---
license: cc-by-nc-4.0
---
Quant of [IkariDev's](https://huggingface.co/IkariDev) and [Undi95's](https://huggingface.co/Undi95) [Nethena-20B](https://huggingface.co/NeverSleep/Nethena-20B).
wikitext was used as the calibration dataset.
|
gstoica3/roberta-large-peft-rte
|
gstoica3
| 2023-10-30T20:35:05Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:FacebookAI/roberta-large",
"base_model:adapter:FacebookAI/roberta-large",
"region:us"
] | null | 2023-10-30T20:35:05Z |
---
library_name: peft
base_model: roberta-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
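In the meantime, a minimal loading sketch, assuming the adapter sits on a two-label sequence-classification head (RTE is a binary entailment task):
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)  # num_labels=2 is an assumption
model = PeftModel.from_pretrained(base, "gstoica3/roberta-large-peft-rte")
tokenizer = AutoTokenizer.from_pretrained("roberta-large")

inputs = tokenizer("A man is playing guitar.", "A person makes music.", return_tensors="pt")
print(model(**inputs).logits)
```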
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Samiel999/ppo-LunarLander-v2
|
Samiel999
| 2023-10-30T20:29:36Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-30T20:29:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.37 +/- 23.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
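Until then, a minimal loading sketch; the checkpoint filename is an assumption, so check the repo's file list:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="Samiel999/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```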
|
petermutwiri/Tiny_Bert_Cupstone
|
petermutwiri
| 2023-10-30T20:18:28Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:huawei-noah/TinyBERT_General_4L_312D",
"base_model:finetune:huawei-noah/TinyBERT_General_4L_312D",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T19:46:41Z |
---
base_model: huawei-noah/TinyBERT_General_4L_312D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Tiny_Bert_Cupstone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tiny_Bert_Cupstone
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3333
- Accuracy: 0.8550
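A minimal inference sketch; the training dataset is unspecified, so check the repo's config for what the labels mean:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="petermutwiri/Tiny_Bert_Cupstone")
print(clf("The plot was predictable but the acting saved it."))
```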
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.524 | 0.2 | 500 | 0.4015 | 0.8318 |
| 0.4268 | 0.4 | 1000 | 0.4274 | 0.8279 |
| 0.39 | 0.6 | 1500 | 0.3743 | 0.8502 |
| 0.3674 | 0.8 | 2000 | 0.3333 | 0.8550 |
| 0.3687 | 1.0 | 2500 | 0.3836 | 0.8585 |
| 0.3489 | 1.2 | 3000 | 0.3927 | 0.8548 |
| 0.3193 | 1.41 | 3500 | 0.3938 | 0.8669 |
| 0.3525 | 1.61 | 4000 | 0.3717 | 0.8753 |
| 0.3327 | 1.81 | 4500 | 0.4589 | 0.8573 |
| 0.3276 | 2.01 | 5000 | 0.3676 | 0.8791 |
| 0.285 | 2.21 | 5500 | 0.4196 | 0.8811 |
| 0.2757 | 2.41 | 6000 | 0.3973 | 0.8777 |
| 0.277 | 2.61 | 6500 | 0.4198 | 0.8805 |
| 0.2834 | 2.81 | 7000 | 0.4955 | 0.8739 |
| 0.338 | 3.01 | 7500 | 0.4383 | 0.8844 |
| 0.2499 | 3.21 | 8000 | 0.4745 | 0.8785 |
| 0.2405 | 3.41 | 8500 | 0.4794 | 0.8854 |
| 0.2648 | 3.61 | 9000 | 0.4576 | 0.8844 |
| 0.2379 | 3.81 | 9500 | 0.4395 | 0.8886 |
| 0.2343 | 4.01 | 10000 | 0.5088 | 0.8791 |
| 0.2011 | 4.22 | 10500 | 0.5272 | 0.8781 |
| 0.2198 | 4.42 | 11000 | 0.5235 | 0.8765 |
| 0.2343 | 4.62 | 11500 | 0.5019 | 0.8844 |
| 0.194 | 4.82 | 12000 | 0.5227 | 0.8791 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
MeghanaArakkal/TuringChat
|
MeghanaArakkal
| 2023-10-30T20:10:33Z | 0 | 1 | null |
[
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-10-30T20:01:02Z |
---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: 2_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2_epochs
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Kooten/Nethena-13B-4bpw-h8-exl2
|
Kooten
| 2023-10-30T19:52:02Z | 52 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T16:02:13Z |
---
license: cc-by-nc-4.0
---
## Description
Exllama 2 quant of [NeverSleep/Nethena-13B](https://huggingface.co/NeverSleep/Nethena-13B)
4 BPW, Head bit set to 8
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## VRAM
My VRAM usage with 13B models is:
| Bits per weight | Context | VRAM |
|--|--|--|
| 8bpw | 8k | 22gb |
| 8bpw | 4k | 19gb |
| 6bpw | 8k | 19gb |
| 6bpw | 4k | 16gb |
| 4bpw | 8k | 16gb |
| 4bpw | 4k | 13gb |
| 3bpw | 8k | 15gb |
| 3bpw | 4k | 12gb |
I have rounded up; these aren't exact numbers, and this is also on a Windows machine. They should be slightly lower on Linux.
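As a rough sanity check on figures like these: the weights alone take about `params × bpw / 8` bytes, and everything above that is context cache and runtime overhead. A sketch:
```python
def weights_vram_gib(n_params_billion: float, bpw: float) -> float:
    # Lower bound: weight storage only, no KV cache or runtime overhead
    return n_params_billion * 1e9 * bpw / 8 / 1024**3

print(f"{weights_vram_gib(13, 4.0):.1f} GiB")  # ~6.1 GiB, so the rest of the 13 GB figure above is cache and overhead
```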
|
diana9m/swin-tiny-patch4-window7-224-finetuned-eurosat
|
diana9m
| 2023-10-30T19:42:25Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-19T09:21:46Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5666
- Accuracy: 0.7778
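A minimal inference sketch; the fine-tuning dataset is listed as None above, so the label set should be checked against the repo's config:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="diana9m/swin-tiny-patch4-window7-224-finetuned-eurosat")
print(classifier("patch.png"))  # "patch.png" is a placeholder image path
```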
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 4.5666 | 0.7778 |
| 5.077 | 2.0 | 13 | 1.7078 | 0.7778 |
| 5.077 | 2.77 | 18 | 1.4156 | 0.7778 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
Ephicho/NLP_Capstone
|
Ephicho
| 2023-10-30T19:28:08Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:huawei-noah/TinyBERT_General_4L_312D",
"base_model:finetune:huawei-noah/TinyBERT_General_4L_312D",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-26T07:28:23Z |
---
base_model: huawei-noah/TinyBERT_General_4L_312D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NLP_Capstone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP_Capstone
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5352
- Accuracy: 0.7654
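A minimal scoring sketch; since the dataset is unknown, the class meanings are not documented here:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ephicho/NLP_Capstone")
model = AutoModelForSequenceClassification.from_pretrained("Ephicho/NLP_Capstone")

inputs = tokenizer("An example sentence to score.", return_tensors="pt")
print(torch.softmax(model(**inputs).logits, dim=-1))  # class probabilities
```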
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
AmelieSchreiber/phi_1_5_vicgalle_alpaca-gpt4
|
AmelieSchreiber
| 2023-10-30T19:23:11Z | 3 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | 2023-10-30T18:45:06Z |
---
library_name: peft
base_model: microsoft/phi-1_5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
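In the meantime, a minimal loading sketch; the Instruction/Response prompt format is an assumption based on the alpaca-gpt4 dataset named in the repo id:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype=torch.float16)  # may need trust_remote_code=True on older transformers
model = PeftModel.from_pretrained(base, "AmelieSchreiber/phi_1_5_vicgalle_alpaca-gpt4")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

ids = tokenizer("### Instruction:\nWrite a haiku about rivers.\n\n### Response:\n", return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=40)[0]))
```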
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
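For reference, the same 4-bit settings can be expressed as a `BitsAndBytesConfig` when reloading the base model (a sketch, not part of the original card):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,  # mirrors the config listed above
)
```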
### Framework versions
- PEFT 0.6.0.dev0
|
sunyijia97/lora-trained-xl-colab-doll-v1_5
|
sunyijia97
| 2023-10-30T19:22:52Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-30T03:16:23Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of che1se4
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sunyijia97/lora-trained-xl-colab-doll-v2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of che1se4 using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
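A minimal generation sketch using that VAE and these LoRA weights (the CUDA device and fp16 dtype are assumptions):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("sunyijia97/lora-trained-xl-colab-doll-v1_5")
image = pipe("a photo of che1se4").images[0]  # the instance prompt from training
image.save("che1se4.png")
```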
|
laion/larger_clap_music
|
laion
| 2023-10-30T19:17:40Z | 5,923 | 25 |
transformers
|
[
"transformers",
"pytorch",
"clap",
"feature-extraction",
"arxiv:2211.06687",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-10-30T18:16:15Z |
---
license: apache-2.0
---
# Model
## TL;DR
CLAP is to audio what CLIP is to image. This is an improved CLAP checkpoint, specifically trained on music.
## Description
CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an audio clip, without directly optimizing for the task. The CLAP model uses a Swin Transformer to extract audio features from a log-Mel spectrogram input, and a RoBERTa model to extract text features. Both the text and audio features are then projected into a latent space of identical dimension. The dot product between the projected audio and text features is used as a similarity score.
# Usage
You can use this model for zero-shot audio classification or for extracting audio and/or textual features.
# Uses
## Perform zero-shot audio classification
### Using `pipeline`
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]
audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/larger_clap_music")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
```
## Run the model:
You can also get the audio and text embeddings using `ClapModel`
### Run the model on CPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/larger_clap_music")
processor = ClapProcessor.from_pretrained("laion/larger_clap_music")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```
### Run the model on GPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/larger_clap_music").to(0)
processor = ClapProcessor.from_pretrained("laion/larger_clap_music")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```
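Text features work the same way; a short sketch with made-up captions:
```python
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/larger_clap_music")
processor = ClapProcessor.from_pretrained("laion/larger_clap_music")

inputs = processor(text=["smooth jazz piano", "distorted metal guitar"], return_tensors="pt", padding=True)
text_embed = model.get_text_features(**inputs)
```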
# Citation
If you are using this model for your work, please consider citing the original paper:
```
@misc{https://doi.org/10.48550/arxiv.2211.06687,
doi = {10.48550/ARXIV.2211.06687},
url = {https://arxiv.org/abs/2211.06687},
author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
schubertcarvalho/text_summarization_t5_trainer
|
schubertcarvalho
| 2023-10-30T19:16:42Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-10-30T19:14:24Z |
---
license: apache-2.0
base_model: t5-small
tags:
- summarization
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: text_summarization_t5_trainer
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_summarization_t5_trainer
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9562
- Rouge1: 0.1285
- Rouge2: 0.0396
- Rougel: 0.1104
- Rougelsum: 0.1102
- Gen Len: 19.0
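A minimal usage sketch; the input text is a made-up stand-in for a bill, and `max_length` is set near the reported generation length:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="schubertcarvalho/text_summarization_t5_trainer")
text = "The bill requires all state agencies to publish their budgets online and to update the figures quarterly."
print(summarizer(text, max_length=19)[0]["summary_text"])
```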
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 16 | 3.5925 | 0.1421 | 0.0501 | 0.1208 | 0.1207 | 19.0 |
| No log | 2.0 | 32 | 3.1487 | 0.1339 | 0.0428 | 0.1146 | 0.1145 | 19.0 |
| No log | 3.0 | 48 | 2.9987 | 0.1285 | 0.04 | 0.1101 | 0.1099 | 19.0 |
| No log | 4.0 | 64 | 2.9562 | 0.1285 | 0.0396 | 0.1104 | 0.1102 | 19.0 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0a0+29c30b1
- Datasets 2.14.5
- Tokenizers 0.14.1
|
dcfidalgo/ppo-LunarLander-v2
|
dcfidalgo
| 2023-10-30T19:03:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-30T15:34:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 294.81 +/- 12.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
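Until then, a minimal sketch that reloads the checkpoint and re-runs the evaluation (the checkpoint filename is an assumption):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="dcfidalgo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
mean_reward, std_reward = evaluate_policy(model, gym.make("LunarLander-v2"), n_eval_episodes=10)
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")
```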
|
JatinKumar/q-FrozenLake-v1-4x4-noSlippery
|
JatinKumar
| 2023-10-30T18:56:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-30T18:56:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle
import gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="JatinKumar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
model = pickle.load(open(path, "rb"))  # pickled dict holding the Q-table and env metadata
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kwwww/bert-base-uncased-test_2_100
|
kwwww
| 2023-10-30T18:55:55Z | 0 | 0 | null |
[
"pytorch",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-10-30T14:13:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-base-uncased-test_2_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-test_2_100
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3742
- F1: 0.8207293666026871
- Accuracy: 0.8132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------:|:--------------------:|
| No log | 1.0 | 7 | 0.6867 | {'f1': 0.6673895364597453} | {'accuracy': 0.5092} |
| No log | 2.0 | 14 | 0.6819 | {'f1': 0.5501760563380282} | {'accuracy': 0.5912} |
| No log | 3.0 | 21 | 0.6808 | {'f1': 0.37319468515309073} | {'accuracy': 0.566} |
| No log | 4.0 | 28 | 0.6778 | {'f1': 0.3979706877113867} | {'accuracy': 0.5728} |
| No log | 5.0 | 35 | 0.6748 | {'f1': 0.432258064516129} | {'accuracy': 0.5776} |
| No log | 6.0 | 42 | 0.6702 | {'f1': 0.5789250952179433} | {'accuracy': 0.602} |
| No log | 7.0 | 49 | 0.6664 | {'f1': 0.5185891325071497} | {'accuracy': 0.596} |
| No log | 8.0 | 56 | 0.6615 | {'f1': 0.5394378966455122} | {'accuracy': 0.5936} |
| No log | 9.0 | 63 | 0.6583 | {'f1': 0.5796124684077507} | {'accuracy': 0.6008} |
| No log | 10.0 | 70 | 0.6547 | {'f1': 0.628030303030303} | {'accuracy': 0.6072} |
| No log | 11.0 | 77 | 0.6429 | {'f1': 0.555812876331635} | {'accuracy': 0.6164} |
| No log | 12.0 | 84 | 0.6200 | {'f1': 0.6544731610337972} | {'accuracy': 0.6524} |
| No log | 13.0 | 91 | 0.6054 | {'f1': 0.6861480075901328} | {'accuracy': 0.6692} |
| No log | 14.0 | 98 | 0.5944 | {'f1': 0.6591107236268527} | {'accuracy': 0.6872} |
| No log | 15.0 | 105 | 0.5802 | {'f1': 0.6939109113199837} | {'accuracy': 0.7004} |
| No log | 16.0 | 112 | 0.5801 | {'f1': 0.7122069523039612} | {'accuracy': 0.7152} |
| No log | 17.0 | 119 | 0.5862 | {'f1': 0.7172413793103448} | {'accuracy': 0.7212} |
| No log | 18.0 | 126 | 0.6508 | {'f1': 0.7453769559032717} | {'accuracy': 0.7136} |
| No log | 19.0 | 133 | 0.5935 | {'f1': 0.7325581395348837} | {'accuracy': 0.7424} |
| No log | 20.0 | 140 | 0.6193 | {'f1': 0.7265029635901777} | {'accuracy': 0.7416} |
| No log | 21.0 | 147 | 0.6967 | {'f1': 0.7574221578566257} | {'accuracy': 0.732} |
| No log | 22.0 | 154 | 0.6781 | {'f1': 0.7065267001369236} | {'accuracy': 0.7428} |
| No log | 23.0 | 161 | 0.6566 | {'f1': 0.7692898272552784} | {'accuracy': 0.7596} |
| No log | 24.0 | 168 | 0.6656 | {'f1': 0.7717265353418308} | {'accuracy': 0.7636} |
| No log | 25.0 | 175 | 0.6746 | {'f1': 0.7662650602409639} | {'accuracy': 0.7672} |
| No log | 26.0 | 182 | 0.7001 | {'f1': 0.7759433962264151} | {'accuracy': 0.772} |
| No log | 27.0 | 189 | 0.7292 | {'f1': 0.7441063009001286} | {'accuracy': 0.7612} |
| No log | 28.0 | 196 | 0.7418 | {'f1': 0.7610474631751227} | {'accuracy': 0.7664} |
| No log | 29.0 | 203 | 0.7614 | {'f1': 0.751592356687898} | {'accuracy': 0.766} |
| No log | 30.0 | 210 | 0.7697 | {'f1': 0.7806022682831443} | {'accuracy': 0.7756} |
| No log | 31.0 | 217 | 0.7885 | {'f1': 0.7721265518622348} | {'accuracy': 0.7724} |
| No log | 32.0 | 224 | 0.8062 | {'f1': 0.7642209398186316} | {'accuracy': 0.7712} |
| No log | 33.0 | 231 | 0.8301 | {'f1': 0.7805456702253853} | {'accuracy': 0.778} |
| No log | 34.0 | 238 | 0.8503 | {'f1': 0.7807570977917981} | {'accuracy': 0.7776} |
| No log | 35.0 | 245 | 0.9258 | {'f1': 0.7845180498697432} | {'accuracy': 0.7684} |
| No log | 36.0 | 252 | 0.9121 | {'f1': 0.7879472693032015} | {'accuracy': 0.7748} |
| No log | 37.0 | 259 | 0.8719 | {'f1': 0.7829238824003222} | {'accuracy': 0.7844} |
| No log | 38.0 | 266 | 0.9147 | {'f1': 0.7897748950782144} | {'accuracy': 0.7796} |
| No log | 39.0 | 273 | 0.8983 | {'f1': 0.7862013638186923} | {'accuracy': 0.7868} |
| No log | 40.0 | 280 | 0.9294 | {'f1': 0.7913779830638953} | {'accuracy': 0.7832} |
| No log | 41.0 | 287 | 0.9203 | {'f1': 0.7841269841269841} | {'accuracy': 0.7824} |
| No log | 42.0 | 294 | 0.9434 | {'f1': 0.7949405902644691} | {'accuracy': 0.786} |
| No log | 43.0 | 301 | 0.9415 | {'f1': 0.7944465869649053} | {'accuracy': 0.7868} |
| No log | 44.0 | 308 | 0.9479 | {'f1': 0.770859805167302} | {'accuracy': 0.7836} |
| No log | 45.0 | 315 | 0.9805 | {'f1': 0.7955927051671733} | {'accuracy': 0.7848} |
| No log | 46.0 | 322 | 0.9753 | {'f1': 0.788184998056743} | {'accuracy': 0.782} |
| No log | 47.0 | 329 | 0.9732 | {'f1': 0.7798537774167345} | {'accuracy': 0.7832} |
| No log | 48.0 | 336 | 1.0218 | {'f1': 0.7910163684811572} | {'accuracy': 0.7804} |
| No log | 49.0 | 343 | 1.0071 | {'f1': 0.7824056052938886} | {'accuracy': 0.7764} |
| No log | 50.0 | 350 | 0.9941 | {'f1': 0.7769962763756723} | {'accuracy': 0.7844} |
| No log | 51.0 | 357 | 1.1072 | {'f1': 0.7849580138736765} | {'accuracy': 0.7644} |
| No log | 52.0 | 364 | 1.0659 | {'f1': 0.7905048982667672} | {'accuracy': 0.7776} |
| No log | 53.0 | 371 | 1.0176 | {'f1': 0.7758268681094325} | {'accuracy': 0.7804} |
| No log | 54.0 | 378 | 1.0482 | {'f1': 0.7857695282289249} | {'accuracy': 0.7784} |
| No log | 55.0 | 385 | 1.2158 | {'f1': 0.784452296819788} | {'accuracy': 0.756} |
| No log | 56.0 | 392 | 1.1118 | {'f1': 0.7880575009214891} | {'accuracy': 0.77} |
| No log | 57.0 | 399 | 1.0318 | {'f1': 0.7878308968787041} | {'accuracy': 0.7852} |
| No log | 58.0 | 406 | 1.0296 | {'f1': 0.7861178369652946} | {'accuracy': 0.788} |
| No log | 59.0 | 413 | 1.1107 | {'f1': 0.7899034892353377} | {'accuracy': 0.7736} |
| No log | 60.0 | 420 | 1.0667 | {'f1': 0.791124713083397} | {'accuracy': 0.7816} |
| No log | 61.0 | 427 | 1.0478 | {'f1': 0.7916666666666666} | {'accuracy': 0.788} |
| No log | 62.0 | 434 | 1.0506 | {'f1': 0.7908625443087829} | {'accuracy': 0.7876} |
| No log | 63.0 | 441 | 1.0569 | {'f1': 0.7927786499215072} | {'accuracy': 0.7888} |
| No log | 64.0 | 448 | 1.0732 | {'f1': 0.7882534775888718} | {'accuracy': 0.7808} |
| No log | 65.0 | 455 | 1.0744 | {'f1': 0.7902287708414115} | {'accuracy': 0.7836} |
| No log | 66.0 | 462 | 1.0650 | {'f1': 0.7919463087248323} | {'accuracy': 0.7892} |
| No log | 67.0 | 469 | 1.1210 | {'f1': 0.7916981132075471} | {'accuracy': 0.7792} |
| No log | 68.0 | 476 | 1.0886 | {'f1': 0.7925552539744086} | {'accuracy': 0.786} |
| No log | 69.0 | 483 | 1.0712 | {'f1': 0.7895372233400404} | {'accuracy': 0.7908} |
| No log | 70.0 | 490 | 1.0749 | {'f1': 0.7860897695107156} | {'accuracy': 0.7884} |
| No log | 71.0 | 497 | 1.0807 | {'f1': 0.7931446791550419} | {'accuracy': 0.7924} |
| 0.1431 | 72.0 | 504 | 1.0837 | {'f1': 0.7931446791550419} | {'accuracy': 0.7924} |
| 0.1431 | 73.0 | 511 | 1.0897 | {'f1': 0.7936758893280632} | {'accuracy': 0.7912} |
| 0.1431 | 74.0 | 518 | 1.0925 | {'f1': 0.7952755905511811} | {'accuracy': 0.792} |
| 0.1431 | 75.0 | 525 | 1.1018 | {'f1': 0.7951713395638628} | {'accuracy': 0.7896} |
| 0.1431 | 76.0 | 532 | 1.1121 | {'f1': 0.7938104448742745} | {'accuracy': 0.7868} |
| 0.1431 | 77.0 | 539 | 1.1071 | {'f1': 0.7945631067961165} | {'accuracy': 0.7884} |
| 0.1431 | 78.0 | 546 | 1.1149 | {'f1': 0.7944250871080138} | {'accuracy': 0.7876} |
| 0.1431 | 79.0 | 553 | 1.1702 | {'f1': 0.7919312663429211} | {'accuracy': 0.7772} |
| 0.1431 | 80.0 | 560 | 1.1048 | {'f1': 0.7970277669143527} | {'accuracy': 0.7924} |
| 0.1431 | 81.0 | 567 | 1.0988 | {'f1': 0.7942583732057418} | {'accuracy': 0.7936} |
| 0.1431 | 82.0 | 574 | 1.1094 | {'f1': 0.797141722905915} | {'accuracy': 0.7956} |
| 0.1431 | 83.0 | 581 | 1.1293 | {'f1': 0.79408330089529} | {'accuracy': 0.7884} |
| 0.1431 | 84.0 | 588 | 1.1591 | {'f1': 0.7948229920060906} | {'accuracy': 0.7844} |
| 0.1431 | 85.0 | 595 | 1.1706 | {'f1': 0.7921241953805376} | {'accuracy': 0.7804} |
| 0.1431 | 86.0 | 602 | 1.1557 | {'f1': 0.792467332820907} | {'accuracy': 0.784} |
| 0.1431 | 87.0 | 609 | 1.1554 | {'f1': 0.76732249786142} | {'accuracy': 0.7824} |
| 0.1431 | 88.0 | 616 | 1.1516 | {'f1': 0.7946257197696737} | {'accuracy': 0.786} |
| 0.1431 | 89.0 | 623 | 1.2337 | {'f1': 0.7969208211143696} | {'accuracy': 0.7784} |
| 0.1431 | 90.0 | 630 | 1.1372 | {'f1': 0.7978227060653188} | {'accuracy': 0.792} |
| 0.1431 | 91.0 | 637 | 1.1228 | {'f1': 0.7916833266693323} | {'accuracy': 0.7916} |
| 0.1431 | 92.0 | 644 | 1.1289 | {'f1': 0.7952569169960475} | {'accuracy': 0.7928} |
| 0.1431 | 93.0 | 651 | 1.1409 | {'f1': 0.7992187500000001} | {'accuracy': 0.7944} |
| 0.1431 | 94.0 | 658 | 1.1469 | {'f1': 0.7989109295993777} | {'accuracy': 0.7932} |
| 0.1431 | 95.0 | 665 | 1.2357 | {'f1': 0.7549019607843137} | {'accuracy': 0.78} |
| 0.1431 | 96.0 | 672 | 1.1278 | {'f1': 0.789664917238595} | {'accuracy': 0.7916} |
| 0.1431 | 97.0 | 679 | 1.1492 | {'f1': 0.8013937282229966} | {'accuracy': 0.7948} |
| 0.1431 | 98.0 | 686 | 1.1501 | {'f1': 0.7805486284289276} | {'accuracy': 0.7888} |
| 0.1431 | 99.0 | 693 | 1.1785 | {'f1': 0.7683807904802381} | {'accuracy': 0.782} |
| 0.1431 | 100.0 | 700 | 1.1602 | {'f1': 0.7807708246995443} | {'accuracy': 0.7884} |
| 0.1431 | 101.0 | 707 | 1.1585 | {'f1': 0.7963621984974298} | {'accuracy': 0.794} |
| 0.1431 | 102.0 | 714 | 1.2049 | {'f1': 0.7948523845571537} | {'accuracy': 0.7832} |
| 0.1431 | 103.0 | 721 | 1.1969 | {'f1': 0.7960275019098548} | {'accuracy': 0.7864} |
| 0.1431 | 104.0 | 728 | 1.1693 | {'f1': 0.7960552268244576} | {'accuracy': 0.7932} |
| 0.1431 | 105.0 | 735 | 1.1664 | {'f1': 0.7934739355352168} | {'accuracy': 0.7924} |
| 0.1431 | 106.0 | 742 | 1.1675 | {'f1': 0.7937898089171975} | {'accuracy': 0.7928} |
| 0.1431 | 107.0 | 749 | 1.1750 | {'f1': 0.7965299684542587} | {'accuracy': 0.7936} |
| 0.1431 | 108.0 | 756 | 1.1829 | {'f1': 0.7989045383411582} | {'accuracy': 0.7944} |
| 0.1431 | 109.0 | 763 | 1.1870 | {'f1': 0.797818465134398} | {'accuracy': 0.7924} |
| 0.1431 | 110.0 | 770 | 1.1873 | {'f1': 0.7987519500780031} | {'accuracy': 0.7936} |
| 0.1431 | 111.0 | 777 | 1.1899 | {'f1': 0.798443579766537} | {'accuracy': 0.7928} |
| 0.1431 | 112.0 | 784 | 1.2010 | {'f1': 0.798151001540832} | {'accuracy': 0.7904} |
| 0.1431 | 113.0 | 791 | 1.1904 | {'f1': 0.799532892175944} | {'accuracy': 0.794} |
| 0.1431 | 114.0 | 798 | 1.1816 | {'f1': 0.7965299684542587} | {'accuracy': 0.7936} |
| 0.1431 | 115.0 | 805 | 1.1729 | {'f1': 0.7906413876563132} | {'accuracy': 0.7924} |
| 0.1431 | 116.0 | 812 | 1.1751 | {'f1': 0.7868453105968332} | {'accuracy': 0.79} |
| 0.1431 | 117.0 | 819 | 1.1747 | {'f1': 0.7909604519774011} | {'accuracy': 0.7928} |
| 0.1431 | 118.0 | 826 | 1.1807 | {'f1': 0.7957244655581948} | {'accuracy': 0.7936} |
| 0.1431 | 119.0 | 833 | 1.3983 | {'f1': 0.7960199004975125} | {'accuracy': 0.7704} |
| 0.1431 | 120.0 | 840 | 1.3032 | {'f1': 0.7992700729927008} | {'accuracy': 0.78} |
| 0.1431 | 121.0 | 847 | 1.2420 | {'f1': 0.7653997378768019} | {'accuracy': 0.7852} |
| 0.1431 | 122.0 | 854 | 1.1608 | {'f1': 0.7954911433172303} | {'accuracy': 0.7968} |
| 0.1431 | 123.0 | 861 | 1.2434 | {'f1': 0.8047512991833704} | {'accuracy': 0.7896} |
| 0.1431 | 124.0 | 868 | 1.1561 | {'f1': 0.7962662337662338} | {'accuracy': 0.7992} |
| 0.1431 | 125.0 | 875 | 1.1961 | {'f1': 0.7776355100298763} | {'accuracy': 0.7916} |
| 0.1431 | 126.0 | 882 | 1.2566 | {'f1': 0.802962962962963} | {'accuracy': 0.7872} |
| 0.1431 | 127.0 | 889 | 1.1969 | {'f1': 0.8042813455657493} | {'accuracy': 0.7952} |
| 0.1431 | 128.0 | 896 | 1.1668 | {'f1': 0.7972480777013354} | {'accuracy': 0.7996} |
| 0.1431 | 129.0 | 903 | 1.1762 | {'f1': 0.7916152897657213} | {'accuracy': 0.7972} |
| 0.1431 | 130.0 | 910 | 1.1758 | {'f1': 0.791307913079131} | {'accuracy': 0.7964} |
| 0.1431 | 131.0 | 917 | 1.1774 | {'f1': 0.8007889546351085} | {'accuracy': 0.798} |
| 0.1431 | 132.0 | 924 | 1.2013 | {'f1': 0.8047564250095894} | {'accuracy': 0.7964} |
| 0.1431 | 133.0 | 931 | 1.2061 | {'f1': 0.805045871559633} | {'accuracy': 0.796} |
| 0.1431 | 134.0 | 938 | 1.1958 | {'f1': 0.8041714947856314} | {'accuracy': 0.7972} |
| 0.1431 | 135.0 | 945 | 1.1887 | {'f1': 0.8040514218932606} | {'accuracy': 0.7988} |
| 0.1431 | 136.0 | 952 | 1.1840 | {'f1': 0.8040832351786416} | {'accuracy': 0.8004} |
| 0.1431 | 137.0 | 959 | 1.1836 | {'f1': 0.8056648308418568} | {'accuracy': 0.8024} |
| 0.1431 | 138.0 | 966 | 1.1792 | {'f1': 0.8033175355450237} | {'accuracy': 0.8008} |
| 0.1431 | 139.0 | 973 | 1.1881 | {'f1': 0.8073322932917315} | {'accuracy': 0.8024} |
| 0.1431 | 140.0 | 980 | 1.2032 | {'f1': 0.8058551617873652} | {'accuracy': 0.7984} |
| 0.1431 | 141.0 | 987 | 1.2021 | {'f1': 0.8070987654320988} | {'accuracy': 0.8} |
| 0.1431 | 142.0 | 994 | 1.2005 | {'f1': 0.8061895551257253} | {'accuracy': 0.7996} |
| 0.0009 | 143.0 | 1001 | 1.1952 | {'f1': 0.8074679113185532} | {'accuracy': 0.802} |
| 0.0009 | 144.0 | 1008 | 1.1926 | {'f1': 0.8085937499999999} | {'accuracy': 0.804} |
| 0.0009 | 145.0 | 1015 | 1.1915 | {'f1': 0.8079780993351583} | {'accuracy': 0.8036} |
| 0.0009 | 146.0 | 1022 | 1.1910 | {'f1': 0.8067424539396315} | {'accuracy': 0.8028} |
| 0.0009 | 147.0 | 1029 | 1.1865 | {'f1': 0.806948282668772} | {'accuracy': 0.8044} |
| 0.0009 | 148.0 | 1036 | 1.1827 | {'f1': 0.8025528520143598} | {'accuracy': 0.802} |
| 0.0009 | 149.0 | 1043 | 1.1839 | {'f1': 0.8004866180048661} | {'accuracy': 0.8032} |
| 0.0009 | 150.0 | 1050 | 1.1840 | {'f1': 0.8009708737864079} | {'accuracy': 0.8032} |
| 0.0009 | 151.0 | 1057 | 1.1846 | {'f1': 0.8025682182985554} | {'accuracy': 0.8032} |
| 0.0009 | 152.0 | 1064 | 1.1869 | {'f1': 0.8039840637450199} | {'accuracy': 0.8032} |
| 0.0009 | 153.0 | 1071 | 1.1888 | {'f1': 0.8044515103338633} | {'accuracy': 0.8032} |
| 0.0009 | 154.0 | 1078 | 1.2019 | {'f1': 0.8078124999999999} | {'accuracy': 0.8032} |
| 0.0009 | 155.0 | 1085 | 1.2122 | {'f1': 0.8083785880527542} | {'accuracy': 0.8024} |
| 0.0009 | 156.0 | 1092 | 1.2193 | {'f1': 0.8083462132921175} | {'accuracy': 0.8016} |
| 0.0009 | 157.0 | 1099 | 1.2198 | {'f1': 0.8083462132921175} | {'accuracy': 0.8016} |
| 0.0009 | 158.0 | 1106 | 1.2121 | {'f1': 0.8087261394624076} | {'accuracy': 0.8036} |
| 0.0009 | 159.0 | 1113 | 1.2084 | {'f1': 0.8078124999999999} | {'accuracy': 0.8032} |
| 0.0009 | 160.0 | 1120 | 1.2091 | {'f1': 0.8078124999999999} | {'accuracy': 0.8032} |
| 0.0009 | 161.0 | 1127 | 1.2117 | {'f1': 0.8074970714564623} | {'accuracy': 0.8028} |
| 0.0009 | 162.0 | 1134 | 1.2270 | {'f1': 0.7828668363019508} | {'accuracy': 0.7952} |
| 0.0009 | 163.0 | 1141 | 1.2069 | {'f1': 0.8028503562945369} | {'accuracy': 0.8008} |
| 0.0009 | 164.0 | 1148 | 1.4732 | {'f1': 0.8007054673721341} | {'accuracy': 0.774} |
| 0.0009 | 165.0 | 1155 | 1.2911 | {'f1': 0.8055451479955038} | {'accuracy': 0.7924} |
| 0.0009 | 166.0 | 1162 | 1.2061 | {'f1': 0.8075709779179809} | {'accuracy': 0.8048} |
| 0.0009 | 167.0 | 1169 | 1.2534 | {'f1': 0.8086070215175539} | {'accuracy': 0.7972} |
| 0.0009 | 168.0 | 1176 | 1.2814 | {'f1': 0.8092744951383695} | {'accuracy': 0.796} |
| 0.0009 | 169.0 | 1183 | 1.2533 | {'f1': 0.8111361926260346} | {'accuracy': 0.7992} |
| 0.0009 | 170.0 | 1190 | 1.2007 | {'f1': 0.8126959247648903} | {'accuracy': 0.8088} |
| 0.0009 | 171.0 | 1197 | 1.1935 | {'f1': 0.8106180665610143} | {'accuracy': 0.8088} |
| 0.0009 | 172.0 | 1204 | 1.1932 | {'f1': 0.8079522862823061} | {'accuracy': 0.8068} |
| 0.0009 | 173.0 | 1211 | 1.1938 | {'f1': 0.8079522862823061} | {'accuracy': 0.8068} |
| 0.0009 | 174.0 | 1218 | 1.1952 | {'f1': 0.8095238095238094} | {'accuracy': 0.808} |
| 0.0009 | 175.0 | 1225 | 1.1973 | {'f1': 0.8118577075098814} | {'accuracy': 0.8096} |
| 0.0009 | 176.0 | 1232 | 1.2001 | {'f1': 0.8123028391167193} | {'accuracy': 0.8096} |
| 0.0009 | 177.0 | 1239 | 1.2003 | {'f1': 0.8126232741617356} | {'accuracy': 0.81} |
| 0.0009 | 178.0 | 1246 | 1.1996 | {'f1': 0.8104678826328311} | {'accuracy': 0.8088} |
| 0.0009 | 179.0 | 1253 | 1.1999 | {'f1': 0.8095238095238094} | {'accuracy': 0.808} |
| 0.0009 | 180.0 | 1260 | 1.2009 | {'f1': 0.8104678826328311} | {'accuracy': 0.8088} |
| 0.0009 | 181.0 | 1267 | 1.2028 | {'f1': 0.8126482213438735} | {'accuracy': 0.8104} |
| 0.0009 | 182.0 | 1274 | 1.2050 | {'f1': 0.8130914826498422} | {'accuracy': 0.8104} |
| 0.0009 | 183.0 | 1281 | 1.2959 | {'f1': 0.8094170403587443} | {'accuracy': 0.796} |
| 0.0009 | 184.0 | 1288 | 1.4564 | {'f1': 0.8015647226173542} | {'accuracy': 0.7768} |
| 0.0009 | 185.0 | 1295 | 1.2213 | {'f1': 0.8090154211150652} | {'accuracy': 0.8068} |
| 0.0009 | 186.0 | 1302 | 1.2472 | {'f1': 0.7836355967946014} | {'accuracy': 0.7948} |
| 0.0009 | 187.0 | 1309 | 1.2286 | {'f1': 0.8066561014263074} | {'accuracy': 0.8048} |
| 0.0009 | 188.0 | 1316 | 1.2583 | {'f1': 0.8121866563825684} | {'accuracy': 0.8052} |
| 0.0009 | 189.0 | 1323 | 1.2744 | {'f1': 0.8105423987776929} | {'accuracy': 0.8016} |
| 0.0009 | 190.0 | 1330 | 1.2877 | {'f1': 0.8078967350037963} | {'accuracy': 0.7976} |
| 0.0009 | 191.0 | 1337 | 1.2626 | {'f1': 0.8108317214700194} | {'accuracy': 0.8044} |
| 0.0009 | 192.0 | 1344 | 1.2989 | {'f1': 0.7748058671268335} | {'accuracy': 0.7912} |
| 0.0009 | 193.0 | 1351 | 1.2673 | {'f1': 0.7831174258253238} | {'accuracy': 0.7924} |
| 0.0009 | 194.0 | 1358 | 1.2525 | {'f1': 0.8090332805071315} | {'accuracy': 0.8072} |
| 0.0009 | 195.0 | 1365 | 1.2736 | {'f1': 0.810077519379845} | {'accuracy': 0.804} |
| 0.0009 | 196.0 | 1372 | 1.3521 | {'f1': 0.8102297998517419} | {'accuracy': 0.7952} |
| 0.0009 | 197.0 | 1379 | 1.3654 | {'f1': 0.8086828550404709} | {'accuracy': 0.792} |
| 0.0009 | 198.0 | 1386 | 1.3538 | {'f1': 0.8093126385809312} | {'accuracy': 0.7936} |
| 0.0009 | 199.0 | 1393 | 1.2624 | {'f1': 0.8131782945736433} | {'accuracy': 0.8072} |
| 0.0009 | 200.0 | 1400 | 1.2467 | {'f1': 0.7957166392092258} | {'accuracy': 0.8016} |
| 0.0009 | 201.0 | 1407 | 1.2774 | {'f1': 0.7833474936278675} | {'accuracy': 0.796} |
| 0.0009 | 202.0 | 1414 | 1.2753 | {'f1': 0.7833827893175075} | {'accuracy': 0.7956} |
| 0.0009 | 203.0 | 1421 | 1.2851 | {'f1': 0.8121398386477141} | {'accuracy': 0.8044} |
| 0.0009 | 204.0 | 1428 | 1.4365 | {'f1': 0.8037585833032164} | {'accuracy': 0.7828} |
| 0.0009 | 205.0 | 1435 | 1.4102 | {'f1': 0.8037997807818781} | {'accuracy': 0.7852} |
| 0.0009 | 206.0 | 1442 | 1.3754 | {'f1': 0.8053293856402663} | {'accuracy': 0.7896} |
| 0.0009 | 207.0 | 1449 | 1.3527 | {'f1': 0.8046407185628742} | {'accuracy': 0.7912} |
| 0.0009 | 208.0 | 1456 | 1.3362 | {'f1': 0.8088955898982284} | {'accuracy': 0.7972} |
| 0.0009 | 209.0 | 1463 | 1.3206 | {'f1': 0.8138561096307575} | {'accuracy': 0.8044} |
| 0.0009 | 210.0 | 1470 | 1.3094 | {'f1': 0.8134814247414783} | {'accuracy': 0.8052} |
| 0.0009 | 211.0 | 1477 | 1.3024 | {'f1': 0.813389765294344} | {'accuracy': 0.806} |
| 0.0009 | 212.0 | 1484 | 1.2958 | {'f1': 0.8108317214700194} | {'accuracy': 0.8044} |
| 0.0009 | 213.0 | 1491 | 1.2930 | {'f1': 0.8102444703143191} | {'accuracy': 0.8044} |
| 0.0009 | 214.0 | 1498 | 1.2977 | {'f1': 0.8108317214700194} | {'accuracy': 0.8044} |
| 0.0003 | 215.0 | 1505 | 1.2979 | {'f1': 0.8109992254066616} | {'accuracy': 0.8048} |
| 0.0003 | 216.0 | 1512 | 1.3123 | {'f1': 0.8141321044546852} | {'accuracy': 0.8064} |
| 0.0003 | 217.0 | 1519 | 1.3245 | {'f1': 0.8129770992366412} | {'accuracy': 0.804} |
| 0.0003 | 218.0 | 1526 | 1.3279 | {'f1': 0.8126669210225105} | {'accuracy': 0.8036} |
| 0.0003 | 219.0 | 1533 | 1.3249 | {'f1': 0.813287514318442} | {'accuracy': 0.8044} |
| 0.0003 | 220.0 | 1540 | 1.3202 | {'f1': 0.8147013782542114} | {'accuracy': 0.8064} |
| 0.0003 | 221.0 | 1547 | 1.3125 | {'f1': 0.8112480739599385} | {'accuracy': 0.804} |
| 0.0003 | 222.0 | 1554 | 1.3040 | {'f1': 0.8105385509492445} | {'accuracy': 0.8044} |
| 0.0003 | 223.0 | 1561 | 1.3616 | {'f1': 0.8061492313460817} | {'accuracy': 0.7932} |
| 0.0003 | 224.0 | 1568 | 1.6007 | {'f1': 0.7990196078431372} | {'accuracy': 0.7704} |
| 0.0003 | 225.0 | 1575 | 1.5556 | {'f1': 0.8007054673721341} | {'accuracy': 0.774} |
| 0.0003 | 226.0 | 1582 | 1.4173 | {'f1': 0.8058608058608058} | {'accuracy': 0.788} |
| 0.0003 | 227.0 | 1589 | 1.2708 | {'f1': 0.8091844813935075} | {'accuracy': 0.8072} |
| 0.0003 | 228.0 | 1596 | 1.2721 | {'f1': 0.7967145790554415} | {'accuracy': 0.802} |
| 0.0003 | 229.0 | 1603 | 1.2797 | {'f1': 0.7948611686697058} | {'accuracy': 0.802} |
| 0.0003 | 230.0 | 1610 | 1.2756 | {'f1': 0.7977020927369718} | {'accuracy': 0.8028} |
| 0.0003 | 231.0 | 1617 | 1.2732 | {'f1': 0.7987012987012987} | {'accuracy': 0.8016} |
| 0.0003 | 232.0 | 1624 | 1.2735 | {'f1': 0.8037007240547064} | {'accuracy': 0.8048} |
| 0.0003 | 233.0 | 1631 | 1.2756 | {'f1': 0.8060775689724111} | {'accuracy': 0.806} |
| 0.0003 | 234.0 | 1638 | 1.2775 | {'f1': 0.8087649402390439} | {'accuracy': 0.808} |
| 0.0003 | 235.0 | 1645 | 1.2786 | {'f1': 0.8084428514536042} | {'accuracy': 0.8076} |
| 0.0003 | 236.0 | 1652 | 1.2803 | {'f1': 0.8068362480127186} | {'accuracy': 0.8056} |
| 0.0003 | 237.0 | 1659 | 1.2827 | {'f1': 0.8076009501187648} | {'accuracy': 0.8056} |
| 0.0003 | 238.0 | 1666 | 1.2816 | {'f1': 0.8071570576540756} | {'accuracy': 0.806} |
| 0.0003 | 239.0 | 1673 | 1.2808 | {'f1': 0.8068635275339185} | {'accuracy': 0.8064} |
| 0.0003 | 240.0 | 1680 | 1.2807 | {'f1': 0.8065547561950439} | {'accuracy': 0.8064} |
| 0.0003 | 241.0 | 1687 | 1.2794 | {'f1': 0.8032193158953722} | {'accuracy': 0.8044} |
| 0.0003 | 242.0 | 1694 | 1.2994 | {'f1': 0.791578947368421} | {'accuracy': 0.802} |
| 0.0003 | 243.0 | 1701 | 1.3223 | {'f1': 0.7840616966580977} | {'accuracy': 0.7984} |
| 0.0003 | 244.0 | 1708 | 1.2878 | {'f1': 0.7956810631229236} | {'accuracy': 0.8032} |
| 0.0003 | 245.0 | 1715 | 1.2761 | {'f1': 0.8040567951318459} | {'accuracy': 0.8068} |
| 0.0003 | 246.0 | 1722 | 1.2763 | {'f1': 0.8051323175621492} | {'accuracy': 0.8056} |
| 0.0003 | 247.0 | 1729 | 1.2789 | {'f1': 0.810207336523126} | {'accuracy': 0.8096} |
| 0.0003 | 248.0 | 1736 | 1.2818 | {'f1': 0.8109393579072532} | {'accuracy': 0.8092} |
| 0.0003 | 249.0 | 1743 | 1.2847 | {'f1': 0.8138801261829653} | {'accuracy': 0.8112} |
| 0.0003 | 250.0 | 1750 | 1.2864 | {'f1': 0.8140267927501971} | {'accuracy': 0.8112} |
| 0.0003 | 251.0 | 1757 | 1.2869 | {'f1': 0.8140267927501971} | {'accuracy': 0.8112} |
| 0.0003 | 252.0 | 1764 | 1.2863 | {'f1': 0.8132649032767469} | {'accuracy': 0.8108} |
| 0.0003 | 253.0 | 1771 | 1.2859 | {'f1': 0.8117088607594937} | {'accuracy': 0.8096} |
| 0.0003 | 254.0 | 1778 | 1.2860 | {'f1': 0.811089108910891} | {'accuracy': 0.8092} |
| 0.0003 | 255.0 | 1785 | 1.2867 | {'f1': 0.81203007518797} | {'accuracy': 0.81} |
| 0.0003 | 256.0 | 1792 | 1.2884 | {'f1': 0.8132649032767469} | {'accuracy': 0.8108} |
| 0.0003 | 257.0 | 1799 | 1.2988 | {'f1': 0.8167252833137943} | {'accuracy': 0.8124} |
| 0.0003 | 258.0 | 1806 | 1.3067 | {'f1': 0.8163424124513619} | {'accuracy': 0.8112} |
| 0.0003 | 259.0 | 1813 | 1.2974 | {'f1': 0.8155111633372502} | {'accuracy': 0.8116} |
| 0.0003 | 260.0 | 1820 | 1.2927 | {'f1': 0.8144654088050315} | {'accuracy': 0.8112} |
| 0.0003 | 261.0 | 1827 | 1.2901 | {'f1': 0.8127962085308058} | {'accuracy': 0.8104} |
| 0.0003 | 262.0 | 1834 | 1.2891 | {'f1': 0.8126732673267326} | {'accuracy': 0.8108} |
| 0.0003 | 263.0 | 1841 | 1.2890 | {'f1': 0.8107893692978976} | {'accuracy': 0.8092} |
| 0.0003 | 264.0 | 1848 | 1.2912 | {'f1': 0.8127962085308058} | {'accuracy': 0.8104} |
| 0.0003 | 265.0 | 1855 | 1.2928 | {'f1': 0.8142011834319528} | {'accuracy': 0.8116} |
| 0.0003 | 266.0 | 1862 | 1.2935 | {'f1': 0.8138801261829653} | {'accuracy': 0.8112} |
| 0.0003 | 267.0 | 1869 | 1.2941 | {'f1': 0.814814814814815} | {'accuracy': 0.812} |
| 0.0003 | 268.0 | 1876 | 1.2942 | {'f1': 0.8138801261829653} | {'accuracy': 0.8112} |
| 0.0003 | 269.0 | 1883 | 1.2951 | {'f1': 0.8144938952343442} | {'accuracy': 0.8116} |
| 0.0003 | 270.0 | 1890 | 1.2983 | {'f1': 0.8141453831041258} | {'accuracy': 0.8108} |
| 0.0003 | 271.0 | 1897 | 1.3002 | {'f1': 0.8142913231252454} | {'accuracy': 0.8108} |
| 0.0003 | 272.0 | 1904 | 1.3017 | {'f1': 0.8156862745098038} | {'accuracy': 0.812} |
| 0.0003 | 273.0 | 1911 | 1.3045 | {'f1': 0.8161189358372457} | {'accuracy': 0.812} |
| 0.0003 | 274.0 | 1918 | 1.3077 | {'f1': 0.8175068386088317} | {'accuracy': 0.8132} |
| 0.0003 | 275.0 | 1925 | 1.3098 | {'f1': 0.8173302107728336} | {'accuracy': 0.8128} |
| 0.0003 | 276.0 | 1932 | 1.3145 | {'f1': 0.8163424124513619} | {'accuracy': 0.8112} |
| 0.0003 | 277.0 | 1939 | 1.3161 | {'f1': 0.8168028004667445} | {'accuracy': 0.8116} |
| 0.0003 | 278.0 | 1946 | 1.3159 | {'f1': 0.8163424124513619} | {'accuracy': 0.8112} |
| 0.0003 | 279.0 | 1953 | 1.3156 | {'f1': 0.8166601790579991} | {'accuracy': 0.8116} |
| 0.0003 | 280.0 | 1960 | 1.3118 | {'f1': 0.8170113148653921} | {'accuracy': 0.8124} |
| 0.0003 | 281.0 | 1967 | 1.3088 | {'f1': 0.8161189358372457} | {'accuracy': 0.812} |
| 0.0003 | 282.0 | 1974 | 1.3077 | {'f1': 0.8145825166601333} | {'accuracy': 0.8108} |
| 0.0003 | 283.0 | 1981 | 1.3072 | {'f1': 0.8149019607843137} | {'accuracy': 0.8112} |
| 0.0003 | 284.0 | 1988 | 1.3075 | {'f1': 0.8149019607843137} | {'accuracy': 0.8112} |
| 0.0003 | 285.0 | 1995 | 1.3084 | {'f1': 0.8149019607843137} | {'accuracy': 0.8112} |
| 0.0001 | 286.0 | 2002 | 1.3097 | {'f1': 0.8147277712495105} | {'accuracy': 0.8108} |
| 0.0001 | 287.0 | 2009 | 1.3106 | {'f1': 0.815655577299413} | {'accuracy': 0.8116} |
| 0.0001 | 288.0 | 2016 | 1.3076 | {'f1': 0.8150765606595994} | {'accuracy': 0.8116} |
| 0.0001 | 289.0 | 2023 | 1.3055 | {'f1': 0.8154269972451791} | {'accuracy': 0.8124} |
| 0.0001 | 290.0 | 2030 | 1.3025 | {'f1': 0.8145224940805051} | {'accuracy': 0.812} |
| 0.0001 | 291.0 | 2037 | 1.3139 | {'f1': 0.8165819319515056} | {'accuracy': 0.8124} |
| 0.0001 | 292.0 | 2044 | 1.3268 | {'f1': 0.8170542635658915} | {'accuracy': 0.8112} |
| 0.0001 | 293.0 | 2051 | 1.3310 | {'f1': 0.8170212765957446} | {'accuracy': 0.8108} |
| 0.0001 | 294.0 | 2058 | 1.3307 | {'f1': 0.8170212765957446} | {'accuracy': 0.8108} |
| 0.0001 | 295.0 | 2065 | 1.4449 | {'f1': 0.8125} | {'accuracy': 0.796} |
| 0.0001 | 296.0 | 2072 | 1.5353 | {'f1': 0.8086175942549373} | {'accuracy': 0.7868} |
| 0.0001 | 297.0 | 2079 | 1.4656 | {'f1': 0.8106530463334549} | {'accuracy': 0.7924} |
| 0.0001 | 298.0 | 2086 | 1.3036 | {'f1': 0.8156028368794326} | {'accuracy': 0.8128} |
| 0.0001 | 299.0 | 2093 | 1.2977 | {'f1': 0.8054410552349547} | {'accuracy': 0.8112} |
| 0.0001 | 300.0 | 2100 | 1.2972 | {'f1': 0.8068739770867429} | {'accuracy': 0.8112} |
| 0.0001 | 301.0 | 2107 | 1.2982 | {'f1': 0.810441767068273} | {'accuracy': 0.8112} |
| 0.0001 | 302.0 | 2114 | 1.3025 | {'f1': 0.8116288331342094} | {'accuracy': 0.8108} |
| 0.0001 | 303.0 | 2121 | 1.3063 | {'f1': 0.8142574257425743} | {'accuracy': 0.8124} |
| 0.0001 | 304.0 | 2128 | 1.3108 | {'f1': 0.8148440584287406} | {'accuracy': 0.8124} |
| 0.0001 | 305.0 | 2135 | 1.3120 | {'f1': 0.8153117600631413} | {'accuracy': 0.8128} |
| 0.0001 | 306.0 | 2142 | 1.3152 | {'f1': 0.8146399055489965} | {'accuracy': 0.8116} |
| 0.0001 | 307.0 | 2149 | 1.3293 | {'f1': 0.8155339805825242} | {'accuracy': 0.81} |
| 0.0001 | 308.0 | 2156 | 1.3356 | {'f1': 0.8165314793356508} | {'accuracy': 0.81} |
| 0.0001 | 309.0 | 2163 | 1.3352 | {'f1': 0.8163896405102435} | {'accuracy': 0.81} |
| 0.0001 | 310.0 | 2170 | 1.3325 | {'f1': 0.8156771439658517} | {'accuracy': 0.81} |
| 0.0001 | 311.0 | 2177 | 1.3303 | {'f1': 0.815390594636611} | {'accuracy': 0.81} |
| 0.0001 | 312.0 | 2184 | 1.3272 | {'f1': 0.8160561184723305} | {'accuracy': 0.8112} |
| 0.0001 | 313.0 | 2191 | 1.3246 | {'f1': 0.8143806174286832} | {'accuracy': 0.81} |
| 0.0001 | 314.0 | 2198 | 1.3224 | {'f1': 0.8134796238244514} | {'accuracy': 0.8096} |
| 0.0001 | 315.0 | 2205 | 1.3203 | {'f1': 0.815251572327044} | {'accuracy': 0.812} |
| 0.0001 | 316.0 | 2212 | 1.3183 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0001 | 317.0 | 2219 | 1.3132 | {'f1': 0.8129952456418383} | {'accuracy': 0.8112} |
| 0.0001 | 318.0 | 2226 | 1.3111 | {'f1': 0.8127236580516899} | {'accuracy': 0.8116} |
| 0.0001 | 319.0 | 2233 | 1.3078 | {'f1': 0.8101164191087917} | {'accuracy': 0.8108} |
| 0.0001 | 320.0 | 2240 | 1.3076 | {'f1': 0.8096774193548387} | {'accuracy': 0.8112} |
| 0.0001 | 321.0 | 2247 | 1.3090 | {'f1': 0.8101164191087917} | {'accuracy': 0.8108} |
| 0.0001 | 322.0 | 2254 | 1.3433 | {'f1': 0.7892491467576792} | {'accuracy': 0.8024} |
| 0.0001 | 323.0 | 2261 | 1.4595 | {'f1': 0.7642058165548098} | {'accuracy': 0.7892} |
| 0.0001 | 324.0 | 2268 | 1.3247 | {'f1': 0.7968026924694994} | {'accuracy': 0.8068} |
| 0.0001 | 325.0 | 2275 | 1.3326 | {'f1': 0.8177570093457942} | {'accuracy': 0.8128} |
| 0.0001 | 326.0 | 2282 | 1.3992 | {'f1': 0.8167105758374106} | {'accuracy': 0.8052} |
| 0.0001 | 327.0 | 2289 | 1.4017 | {'f1': 0.8177376925967682} | {'accuracy': 0.806} |
| 0.0001 | 328.0 | 2296 | 1.3527 | {'f1': 0.8194070080862534} | {'accuracy': 0.8124} |
| 0.0001 | 329.0 | 2303 | 1.3316 | {'f1': 0.8175465838509317} | {'accuracy': 0.812} |
| 0.0001 | 330.0 | 2310 | 1.3199 | {'f1': 0.8155111633372502} | {'accuracy': 0.8116} |
| 0.0001 | 331.0 | 2317 | 1.3143 | {'f1': 0.8127709893575089} | {'accuracy': 0.81} |
| 0.0001 | 332.0 | 2324 | 1.3109 | {'f1': 0.8113879003558718} | {'accuracy': 0.8092} |
| 0.0001 | 333.0 | 2331 | 1.3092 | {'f1': 0.8114104595879558} | {'accuracy': 0.8096} |
| 0.0001 | 334.0 | 2338 | 1.3085 | {'f1': 0.8104678826328311} | {'accuracy': 0.8088} |
| 0.0001 | 335.0 | 2345 | 1.3083 | {'f1': 0.8107893692978976} | {'accuracy': 0.8092} |
| 0.0001 | 336.0 | 2352 | 1.3086 | {'f1': 0.8107893692978976} | {'accuracy': 0.8092} |
| 0.0001 | 337.0 | 2359 | 1.3096 | {'f1': 0.8109393579072532} | {'accuracy': 0.8092} |
| 0.0001 | 338.0 | 2366 | 1.3108 | {'f1': 0.8118811881188118} | {'accuracy': 0.81} |
| 0.0001 | 339.0 | 2373 | 1.3119 | {'f1': 0.812351543942993} | {'accuracy': 0.8104} |
| 0.0001 | 340.0 | 2380 | 1.3130 | {'f1': 0.8117088607594937} | {'accuracy': 0.8096} |
| 0.0001 | 341.0 | 2387 | 1.3141 | {'f1': 0.8115369419201897} | {'accuracy': 0.8092} |
| 0.0001 | 342.0 | 2394 | 1.3154 | {'f1': 0.811216429699842} | {'accuracy': 0.8088} |
| 0.0001 | 343.0 | 2401 | 1.3151 | {'f1': 0.8115369419201897} | {'accuracy': 0.8092} |
| 0.0001 | 344.0 | 2408 | 1.3154 | {'f1': 0.8115369419201897} | {'accuracy': 0.8092} |
| 0.0001 | 345.0 | 2415 | 1.3156 | {'f1': 0.8115369419201897} | {'accuracy': 0.8092} |
| 0.0001 | 346.0 | 2422 | 1.3157 | {'f1': 0.8115369419201897} | {'accuracy': 0.8092} |
| 0.0001 | 347.0 | 2429 | 1.3158 | {'f1': 0.8115369419201897} | {'accuracy': 0.8092} |
| 0.0001 | 348.0 | 2436 | 1.3338 | {'f1': 0.8160561184723305} | {'accuracy': 0.8112} |
| 0.0001 | 349.0 | 2443 | 1.3439 | {'f1': 0.819062378922898} | {'accuracy': 0.8132} |
| 0.0001 | 350.0 | 2450 | 1.3474 | {'f1': 0.8188854489164088} | {'accuracy': 0.8128} |
| 0.0001 | 351.0 | 2457 | 1.3484 | {'f1': 0.8188854489164088} | {'accuracy': 0.8128} |
| 0.0001 | 352.0 | 2464 | 1.3478 | {'f1': 0.8188854489164088} | {'accuracy': 0.8128} |
| 0.0001 | 353.0 | 2471 | 1.3462 | {'f1': 0.8186046511627906} | {'accuracy': 0.8128} |
| 0.0001 | 354.0 | 2478 | 1.3432 | {'f1': 0.8183229813664596} | {'accuracy': 0.8128} |
| 0.0001 | 355.0 | 2485 | 1.3415 | {'f1': 0.8172628304821151} | {'accuracy': 0.812} |
| 0.0001 | 356.0 | 2492 | 1.3380 | {'f1': 0.8166601790579991} | {'accuracy': 0.8116} |
| 0.0001 | 357.0 | 2499 | 1.3354 | {'f1': 0.8165495706479313} | {'accuracy': 0.812} |
| 0.0011 | 358.0 | 2506 | 1.3370 | {'f1': 0.816374269005848} | {'accuracy': 0.8116} |
| 0.0011 | 359.0 | 2513 | 1.3384 | {'f1': 0.8172964550058435} | {'accuracy': 0.8124} |
| 0.0011 | 360.0 | 2520 | 1.3373 | {'f1': 0.8166926677067083} | {'accuracy': 0.812} |
| 0.0011 | 361.0 | 2527 | 1.3354 | {'f1': 0.8157689305230289} | {'accuracy': 0.8112} |
| 0.0011 | 362.0 | 2534 | 1.3336 | {'f1': 0.8153364632237872} | {'accuracy': 0.8112} |
| 0.0011 | 363.0 | 2541 | 1.3321 | {'f1': 0.8145825166601333} | {'accuracy': 0.8108} |
| 0.0011 | 364.0 | 2548 | 1.3280 | {'f1': 0.8149312377210216} | {'accuracy': 0.8116} |
| 0.0011 | 365.0 | 2555 | 1.3711 | {'f1': 0.819433817903596} | {'accuracy': 0.8112} |
| 0.0011 | 366.0 | 2562 | 1.4276 | {'f1': 0.8177083333333331} | {'accuracy': 0.804} |
| 0.0011 | 367.0 | 2569 | 1.4536 | {'f1': 0.8159645232815964} | {'accuracy': 0.8008} |
| 0.0011 | 368.0 | 2576 | 1.4590 | {'f1': 0.8161004431314622} | {'accuracy': 0.8008} |
| 0.0011 | 369.0 | 2583 | 1.3146 | {'f1': 0.8145224940805051} | {'accuracy': 0.812} |
| 0.0011 | 370.0 | 2590 | 1.3096 | {'f1': 0.8057851239669422} | {'accuracy': 0.812} |
| 0.0011 | 371.0 | 2597 | 1.3042 | {'f1': 0.8077080770807707} | {'accuracy': 0.8124} |
| 0.0011 | 372.0 | 2604 | 1.3011 | {'f1': 0.8080065359477124} | {'accuracy': 0.812} |
| 0.0011 | 373.0 | 2611 | 1.3000 | {'f1': 0.8090982940698618} | {'accuracy': 0.812} |
| 0.0011 | 374.0 | 2618 | 1.3001 | {'f1': 0.8127522195318806} | {'accuracy': 0.8144} |
| 0.0011 | 375.0 | 2625 | 1.3009 | {'f1': 0.8102893890675241} | {'accuracy': 0.8112} |
| 0.0011 | 376.0 | 2632 | 1.3019 | {'f1': 0.8102687525070197} | {'accuracy': 0.8108} |
| 0.0011 | 377.0 | 2639 | 1.3028 | {'f1': 0.8112} | {'accuracy': 0.8112} |
| 0.0011 | 378.0 | 2646 | 1.3038 | {'f1': 0.8119760479041915} | {'accuracy': 0.8116} |
| 0.0011 | 379.0 | 2653 | 1.3058 | {'f1': 0.8125746120175089} | {'accuracy': 0.8116} |
| 0.0011 | 380.0 | 2660 | 1.3096 | {'f1': 0.8134653465346535} | {'accuracy': 0.8116} |
| 0.0011 | 381.0 | 2667 | 1.3122 | {'f1': 0.8164232135807343} | {'accuracy': 0.814} |
| 0.0011 | 382.0 | 2674 | 1.3137 | {'f1': 0.8168902920284135} | {'accuracy': 0.8144} |
| 0.0011 | 383.0 | 2681 | 1.3156 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 384.0 | 2688 | 1.3162 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 385.0 | 2695 | 1.3165 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 386.0 | 2702 | 1.3168 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 387.0 | 2709 | 1.3169 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 388.0 | 2716 | 1.3166 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 389.0 | 2723 | 1.3166 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 390.0 | 2730 | 1.3168 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 391.0 | 2737 | 1.3165 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 392.0 | 2744 | 1.3168 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 393.0 | 2751 | 1.3172 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 394.0 | 2758 | 1.3173 | {'f1': 0.8170347003154574} | {'accuracy': 0.8144} |
| 0.0011 | 395.0 | 2765 | 1.3161 | {'f1': 0.8154879494271038} | {'accuracy': 0.8132} |
| 0.0011 | 396.0 | 2772 | 1.3156 | {'f1': 0.8148734177215189} | {'accuracy': 0.8128} |
| 0.0011 | 397.0 | 2779 | 1.3148 | {'f1': 0.8129952456418383} | {'accuracy': 0.8112} |
| 0.0011 | 398.0 | 2786 | 1.3146 | {'f1': 0.8129952456418383} | {'accuracy': 0.8112} |
| 0.0011 | 399.0 | 2793 | 1.3142 | {'f1': 0.8133174791914388} | {'accuracy': 0.8116} |
| 0.0011 | 400.0 | 2800 | 1.3146 | {'f1': 0.8129952456418383} | {'accuracy': 0.8112} |
| 0.0011 | 401.0 | 2807 | 1.3163 | {'f1': 0.8139350752177354} | {'accuracy': 0.812} |
| 0.0011 | 402.0 | 2814 | 1.3147 | {'f1': 0.8139627132090439} | {'accuracy': 0.8124} |
| 0.0011 | 403.0 | 2821 | 1.3137 | {'f1': 0.813195548489666} | {'accuracy': 0.812} |
| 0.0011 | 404.0 | 2828 | 1.3133 | {'f1': 0.8135188866799204} | {'accuracy': 0.8124} |
| 0.0011 | 405.0 | 2835 | 1.3132 | {'f1': 0.8135188866799204} | {'accuracy': 0.8124} |
| 0.0011 | 406.0 | 2842 | 1.3132 | {'f1': 0.8125746120175089} | {'accuracy': 0.8116} |
| 0.0011 | 407.0 | 2849 | 1.3132 | {'f1': 0.8121019108280254} | {'accuracy': 0.8112} |
| 0.0011 | 408.0 | 2856 | 1.3146 | {'f1': 0.8130469371519491} | {'accuracy': 0.812} |
| 0.0011 | 409.0 | 2863 | 1.3186 | {'f1': 0.8144044321329641} | {'accuracy': 0.8124} |
| 0.0011 | 410.0 | 2870 | 1.3217 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0011 | 411.0 | 2877 | 1.3233 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 412.0 | 2884 | 1.3243 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 413.0 | 2891 | 1.3248 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 414.0 | 2898 | 1.3249 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 415.0 | 2905 | 1.3248 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 416.0 | 2912 | 1.3249 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 417.0 | 2919 | 1.3251 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 418.0 | 2926 | 1.3250 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 419.0 | 2933 | 1.3250 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 420.0 | 2940 | 1.3250 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 421.0 | 2947 | 1.3246 | {'f1': 0.8162460567823343} | {'accuracy': 0.8136} |
| 0.0011 | 422.0 | 2954 | 1.3244 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0011 | 423.0 | 2961 | 1.3242 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0011 | 424.0 | 2968 | 1.3245 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0011 | 425.0 | 2975 | 1.3256 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 426.0 | 2982 | 1.3260 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 427.0 | 2989 | 1.3261 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0011 | 428.0 | 2996 | 1.3264 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0 | 429.0 | 3003 | 1.3265 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0 | 430.0 | 3010 | 1.3268 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0 | 431.0 | 3017 | 1.3265 | {'f1': 0.8162460567823343} | {'accuracy': 0.8136} |
| 0.0 | 432.0 | 3024 | 1.3260 | {'f1': 0.8161010260457774} | {'accuracy': 0.8136} |
| 0.0 | 433.0 | 3031 | 1.3259 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 434.0 | 3038 | 1.3260 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 435.0 | 3045 | 1.3262 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 436.0 | 3052 | 1.3257 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 437.0 | 3059 | 1.3255 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 438.0 | 3066 | 1.3250 | {'f1': 0.8154879494271038} | {'accuracy': 0.8132} |
| 0.0 | 439.0 | 3073 | 1.3247 | {'f1': 0.8153420324238829} | {'accuracy': 0.8132} |
| 0.0 | 440.0 | 3080 | 1.3245 | {'f1': 0.8144044321329641} | {'accuracy': 0.8124} |
| 0.0 | 441.0 | 3087 | 1.3242 | {'f1': 0.8144044321329641} | {'accuracy': 0.8124} |
| 0.0 | 442.0 | 3094 | 1.3243 | {'f1': 0.8144044321329641} | {'accuracy': 0.8124} |
| 0.0 | 443.0 | 3101 | 1.3247 | {'f1': 0.8144044321329641} | {'accuracy': 0.8124} |
| 0.0 | 444.0 | 3108 | 1.3250 | {'f1': 0.8144044321329641} | {'accuracy': 0.8124} |
| 0.0 | 445.0 | 3115 | 1.3254 | {'f1': 0.8153420324238829} | {'accuracy': 0.8132} |
| 0.0 | 446.0 | 3122 | 1.3254 | {'f1': 0.8148734177215189} | {'accuracy': 0.8128} |
| 0.0 | 447.0 | 3129 | 1.3257 | {'f1': 0.8153420324238829} | {'accuracy': 0.8132} |
| 0.0 | 448.0 | 3136 | 1.3258 | {'f1': 0.8153420324238829} | {'accuracy': 0.8132} |
| 0.0 | 449.0 | 3143 | 1.3260 | {'f1': 0.8153420324238829} | {'accuracy': 0.8132} |
| 0.0 | 450.0 | 3150 | 1.3264 | {'f1': 0.8153420324238829} | {'accuracy': 0.8132} |
| 0.0 | 451.0 | 3157 | 1.3270 | {'f1': 0.815955766192733} | {'accuracy': 0.8136} |
| 0.0 | 452.0 | 3164 | 1.3273 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 453.0 | 3171 | 1.3276 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 454.0 | 3178 | 1.3277 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 455.0 | 3185 | 1.3278 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 456.0 | 3192 | 1.3279 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 457.0 | 3199 | 1.3283 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 458.0 | 3206 | 1.3285 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 459.0 | 3213 | 1.3288 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 460.0 | 3220 | 1.3290 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 461.0 | 3227 | 1.3291 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 462.0 | 3234 | 1.3291 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 463.0 | 3241 | 1.3296 | {'f1': 0.8153117600631413} | {'accuracy': 0.8128} |
| 0.0 | 464.0 | 3248 | 1.3298 | {'f1': 0.8153117600631413} | {'accuracy': 0.8128} |
| 0.0 | 465.0 | 3255 | 1.3297 | {'f1': 0.8153117600631413} | {'accuracy': 0.8128} |
| 0.0 | 466.0 | 3262 | 1.3295 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 467.0 | 3269 | 1.3298 | {'f1': 0.8156336360047375} | {'accuracy': 0.8132} |
| 0.0 | 468.0 | 3276 | 1.3301 | {'f1': 0.8153117600631413} | {'accuracy': 0.8128} |
| 0.0 | 469.0 | 3283 | 1.3306 | {'f1': 0.8153117600631413} | {'accuracy': 0.8128} |
| 0.0 | 470.0 | 3290 | 1.3309 | {'f1': 0.8153117600631413} | {'accuracy': 0.8128} |
| 0.0 | 471.0 | 3297 | 1.3321 | {'f1': 0.8162460567823343} | {'accuracy': 0.8136} |
| 0.0 | 472.0 | 3304 | 1.3328 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0 | 473.0 | 3311 | 1.3333 | {'f1': 0.8176447420244192} | {'accuracy': 0.8148} |
| 0.0 | 474.0 | 3318 | 1.3335 | {'f1': 0.8176447420244192} | {'accuracy': 0.8148} |
| 0.0 | 475.0 | 3325 | 1.3335 | {'f1': 0.8176447420244192} | {'accuracy': 0.8148} |
| 0.0 | 476.0 | 3332 | 1.3336 | {'f1': 0.8176447420244192} | {'accuracy': 0.8148} |
| 0.0 | 477.0 | 3339 | 1.3336 | {'f1': 0.8176447420244192} | {'accuracy': 0.8148} |
| 0.0 | 478.0 | 3346 | 1.3337 | {'f1': 0.8176447420244192} | {'accuracy': 0.8148} |
| 0.0 | 479.0 | 3353 | 1.3335 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0 | 480.0 | 3360 | 1.3334 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0 | 481.0 | 3367 | 1.3336 | {'f1': 0.8167126527394561} | {'accuracy': 0.814} |
| 0.0 | 482.0 | 3374 | 1.3336 | {'f1': 0.8171788810086682} | {'accuracy': 0.8144} |
| 0.0 | 483.0 | 3381 | 1.3534 | {'f1': 0.8176538908246225} | {'accuracy': 0.8116} |
| 0.0 | 484.0 | 3388 | 1.3670 | {'f1': 0.8195836545875097} | {'accuracy': 0.8128} |
| 0.0 | 485.0 | 3395 | 1.3735 | {'f1': 0.8201383551114528} | {'accuracy': 0.8128} |
| 0.0 | 486.0 | 3402 | 1.3764 | {'f1': 0.8216340621403913} | {'accuracy': 0.814} |
| 0.0 | 487.0 | 3409 | 1.3759 | {'f1': 0.8216340621403913} | {'accuracy': 0.814} |
| 0.0 | 488.0 | 3416 | 1.3750 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 489.0 | 3423 | 1.3743 | {'f1': 0.8207293666026871} | {'accuracy': 0.8132} |
| 0.0 | 490.0 | 3430 | 1.3739 | {'f1': 0.8207293666026871} | {'accuracy': 0.8132} |
| 0.0 | 491.0 | 3437 | 1.3746 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 492.0 | 3444 | 1.3754 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 493.0 | 3451 | 1.3755 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 494.0 | 3458 | 1.3754 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 495.0 | 3465 | 1.3753 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 496.0 | 3472 | 1.3751 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 497.0 | 3479 | 1.3749 | {'f1': 0.8211818879508825} | {'accuracy': 0.8136} |
| 0.0 | 498.0 | 3486 | 1.3746 | {'f1': 0.8207293666026871} | {'accuracy': 0.8132} |
| 0.0 | 499.0 | 3493 | 1.3743 | {'f1': 0.8207293666026871} | {'accuracy': 0.8132} |
| 0.0 | 500.0 | 3500 | 1.3742 | {'f1': 0.8207293666026871} | {'accuracy': 0.8132} |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lukekim420/qlora-koalpaca-polyglot-5.8b-sshsbamboobot
|
lukekim420
| 2023-10-30T18:39:39Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:beomi/KoAlpaca-Polyglot-5.8B",
"base_model:adapter:beomi/KoAlpaca-Polyglot-5.8B",
"region:us"
] | null | 2023-10-30T18:39:37Z |
---
library_name: peft
base_model: beomi/KoAlpaca-Polyglot-5.8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
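This card ships no usage snippet; the sketch below (unofficial) loads the adapter with the 4-bit settings listed above. The repo ids come from this card's metadata; everything else is an assumption.
```python
# Unofficial sketch: load the base model in 4-bit (mirroring the config above) and attach the adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("beomi/KoAlpaca-Polyglot-5.8B")
base = AutoModelForCausalLM.from_pretrained(
    "beomi/KoAlpaca-Polyglot-5.8B",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "lukekim420/qlora-koalpaca-polyglot-5.8b-sshsbamboobot")
```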
### Framework versions
- PEFT 0.6.0.dev0
|
Kooten/Nethena-13B-8bpw-h8-exl2
|
Kooten
| 2023-10-30T18:32:54Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T16:02:00Z |
---
license: cc-by-nc-4.0
---
## Description
Exllama 2 quant of [NeverSleep/Nethena-13B](https://huggingface.co/NeverSleep/Nethena-13B)
8 BPW, Head bit set to 8
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
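As an illustration (not part of the original card), a small helper that fills this template; it assumes a single-turn instruction with no input field:
```python
# Hedged helper: build a single-turn Alpaca prompt matching the template above.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```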
## VRAM
My VRAM usage with 13B models is as follows:
| Bits per weight | Context | VRAM |
|--|--|--|
| 8bpw | 8k | 22gb |
| 8bpw | 4k | 19gb |
| 6bpw | 8k | 19gb |
| 6bpw | 4k | 16gb |
| 4bpw | 8k | 16gb |
| 4bpw | 4k | 13gb |
| 3bpw | 8k | 15gb |
| 3bpw | 4k | 12gb |
I have rounded up, so these aren't exact numbers. These were measured on a Windows machine; they should be slightly lower on Linux.
|
gstoica3/roberta-large-peft-mrpc
|
gstoica3
| 2023-10-30T18:31:34Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:FacebookAI/roberta-large",
"base_model:adapter:FacebookAI/roberta-large",
"region:us"
] | null | 2023-10-30T18:31:33Z |
---
library_name: peft
base_model: roberta-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
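Since no official snippet is given, here is a minimal, unofficial sketch; the two-label MRPC-style paraphrase setup is inferred from the repo name, not confirmed by the card:
```python
# Hedged sketch: num_labels=2 and the MRPC task are assumptions from the repo name.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
base = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
model = PeftModel.from_pretrained(base, "gstoica3/roberta-large-peft-mrpc")

inputs = tokenizer("He said hello.", "He greeted everyone.", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()  # 1 ~ paraphrase, 0 ~ not (assumed label order)
```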
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
tingchih/1030-1
|
tingchih
| 2023-10-30T18:30:52Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T17:43:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 1030-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1030-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3641 | 1.0 | 24828 | 1.3631 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
justinlevi/mistral-finetuned
|
justinlevi
| 2023-10-30T18:29:44Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-10-30T18:28:38Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 20
- mixed_precision_training: Native AMP
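For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows; this is a reconstruction, not the exact training script (`output_dir` is an assumption):
```python
# Hedged reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-finetuned",       # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=20,
    fp16=True,                            # "Native AMP" mixed precision
)
```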
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
noble6/jokesru_enllama-falcon-7b
|
noble6
| 2023-10-30T18:28:31Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-25T19:38:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
yjlee1011/ncodeR_data_multilabel_8samples
|
yjlee1011
| 2023-10-30T18:27:27Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-10-30T18:07:10Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# yjlee1011/ncodeR_data_multilabel_8samples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
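Put together, a minimal training sketch of these two stages might look like the following (assuming the pre-1.0 `setfit` trainer API and a toy two-example dataset):
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst"],
    "label": [1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: fit the classification head
```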
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_multilabel_8samples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
uppara/myhouse
|
uppara
| 2023-10-30T18:25:55Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-30T18:21:03Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### myhouse Dreambooth model trained by uppara following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVR-21
Sample pictures of this concept:
*(five sample images)*
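No usage snippet ships with the card; a minimal sketch follows, assuming `myhouse` is the Dreambooth instance token (inferred from the repo name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "uppara/myhouse", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of myhouse, wide angle").images[0]  # instance token assumed
image.save("myhouse.png")
```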
|
vrx2/matscibert-QA
|
vrx2
| 2023-10-30T18:24:26Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model: m3rg-iitd/matscibert",
"dataset:squad",
"base_model:m3rg-iitd/matscibert",
"base_model:finetune:m3rg-iitd/matscibert",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-26T11:15:56Z |
---
base_model: m3rg-iitd/matscibert
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: matscibert-QA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# matscibert-QA
This is just a soft demo and it is not yet working as intended.
This model is a fine-tuned version of [m3rg-iitd/matscibert](https://huggingface.co/m3rg-iitd/matscibert) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1091
## Model description
More information needed
## Intended uses & limitations
More information needed
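In the absence of documented usage, a hedged sketch via the `question-answering` pipeline (the materials-science example below is invented):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vrx2/matscibert-QA")
print(qa(
    question="What is the band gap of silicon?",
    context="Silicon has an indirect band gap of about 1.1 eV at room temperature.",
))
```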
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1571 | 1.0 | 5564 | 1.1091 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cpu
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yjlee1011/ncodeR_data_multilabel_16samples
|
yjlee1011
| 2023-10-30T18:11:32Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-10-30T18:11:11Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# yjlee1011/ncodeR_data_multilabel_16samples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_multilabel_16samples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
alessiodm/ppo-SnowballTarget
|
alessiodm
| 2023-10-30T18:05:27Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-10-30T18:05:19Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: alessiodm/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jahb57/test_trainer
|
jahb57
| 2023-10-30T17:59:44Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T17:58:43Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: test_trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0617
- Accuracy: 0.584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 1.1204 | 0.515 |
| 1.2411 | 2.0 | 500 | 1.0231 | 0.57 |
| 1.2411 | 3.0 | 750 | 1.0617 | 0.584 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Ben141/LLM17
|
Ben141
| 2023-10-30T17:48:05Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-10-30T17:34:25Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: LLM17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLM17
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 120
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sajjadamjad/bert-base-banking77-pt2
|
sajjadamjad
| 2023-10-30T17:46:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T16:41:42Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9282963964565724
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- F1: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
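A plausible usage sketch (not from the original card), assuming standard pipeline inference over the banking77 intents:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="sajjadamjad/bert-base-banking77-pt2")
print(clf("I am still waiting on my card, when will it arrive?"))
```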
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1348 | 1.0 | 626 | 0.8122 | 0.8288 |
| 0.391 | 2.0 | 1252 | 0.3681 | 0.9219 |
| 0.1881 | 3.0 | 1878 | 0.3035 | 0.9283 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Usdt666/gpu2
|
Usdt666
| 2023-10-30T17:37:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-30T17:29:24Z |
<img src="https://raw.githubusercontent.com/leptonai/leptonai/main/assets/logo.svg" height=100>
# Lepton AI
**A Pythonic framework to simplify AI service building**
<a href="https://lepton.ai/">Homepage</a> •
<a href="https://dashboard.lepton.ai/playground">API Playground</a> •
<a href="https://github.com/leptonai/examples">Examples</a> •
<a href="https://lepton.ai/docs/">Documentation</a> •
<a href="https://lepton.ai/references">CLI References</a> •
<a href="https://twitter.com/leptonai">Twitter</a> •
<a href="https://leptonai.medium.com/">Blog</a>
The LeptonAI python library allows you to build an AI service from python code with ease. Key features include:
- A pythonic abstraction `Photon`, allowing you to convert research and modeling code into a service with a few lines of code.
- Simple abstractions to launch models like those on [HuggingFace](https://huggingface.co) in a few lines of code.
- Prebuilt examples for common models such as Llama, SDXL, Whisper, and others.
- AI-tailored batteries included, such as autobatching, background jobs, etc.
- A client to automatically call your service like native Python functions.
- Pythonic configuration specs to be readily shipped in a cloud environment.
## Getting started with one-liner
Install the library with:
```shell
pip install -U leptonai
```
This installs the `leptonai` python library, as well as the commandline interface `lep`. You can then launch a HuggingFace model, say `gpt2`, in one line of code:
```shell
lep photon run --name gpt2 --model hf:gpt2 --local
```
If you have access to the Llama2 model ([apply for access here](https://huggingface.co/meta-llama/Llama-2-7b)) and you have a reasonably sized GPU, you can launch it with:
```shell
# hint: you can also write `-n` and `-m` for short
lep photon run -n llama2 -m hf:meta-llama/Llama-2-7b-chat-hf --local
```
(Be sure to use the `-hf` version for Llama2, which is compatible with huggingface pipelines.)
You can then access the service with:
```python
from leptonai.client import Client, local
c = Client(local(port=8080))
# Use the following to print the doc
print(c.run.__doc__)
print(c.run(inputs="I enjoy walking with my cute dog"))
```
Fully managed Llama2 models and CodeLlama models can be found in the [playground](https://dashboard.lepton.ai/playground).
Many standard HuggingFace pipelines are supported - find out more details in the [documentation](https://www.lepton.ai/docs/advanced/prebuilt_photons#hugging-face-photons). Not all HuggingFace models are supported though, as many of them contain custom code and are not standard pipelines. If you find a popular model you would like to support, please [open an issue or a PR](https://github.com/leptonai/leptonai/issues/new).
## Checking out more examples
You can find out more examples from the [examples repository](https://github.com/leptonai/examples). For example, launch the Stable Diffusion XL model with:
```shell
git clone git@github.com:leptonai/examples.git
cd examples
```
```shell
lep photon run -n sdxl -m advanced/sdxl/sdxl.py --local
```
Once the service is running, you can access it with:
```python
from leptonai.client import Client, local
c = Client(local(port=8080))
img_content = c.run(prompt="a cat launching rocket", seed=1234)
with open("cat.png", "wb") as fid:
    fid.write(img_content)
```
or access the mounted Gradio UI at [http://localhost:8080/ui](http://localhost:8080/ui). Check the [README file](https://github.com/leptonai/examples/blob/main/advanced/sdxl/README.md) for more details.
A fully managed SDXL is hosted at [https://dashboard.lepton.ai/playground/sdxl](https://dashboard.lepton.ai/playground/sdxl) with API access.
## Writing your own photons
Writing your own photon is simple: write a python Photon class and decorate functions with `@Photon.handler`. As long as your input and output are JSON serializable, you are good to go. For example, the following code launches a simple echo service:
```python
# my_photon.py
from leptonai.photon import Photon
class Echo(Photon):
    @Photon.handler
    def echo(self, inputs: str) -> str:
        """
        A simple example to return the original input.
        """
        return inputs
```
You can then launch the service with:
```shell
lep photon run -n echo -m my_photon.py --local
```
Then, you can use your service as follows:
```python
from leptonai.client import Client, local
c = Client(local(port=8080))
# will print available paths
print(c.paths())
# will print the doc for c.echo. You can also use `c.echo?` in Jupyter.
print(c.echo.__doc__)
# will actually call echo.
c.echo(inputs="hello world")
```
For more details, checkout the [documentation](https://lepton.ai/docs/) and the [examples](https://github.com/leptonai/examples).
## Contributing
Contributions and collaborations are welcome and highly appreciated. Please check out the [contributor guide](https://github.com/leptonai/leptonai/blob/main/CONTRIBUTING.md) for how to get involved.
## License
The Lepton AI python library is released under the Apache 2.0 license.
Developer Note: early development of LeptonAI was in a separate mono-repo, which is why you may see commits from the `leptonai/lepton` repo. We intend to use this open source repo as the source of truth going forward.
|
EduardoCam/mimuchacho0_o
|
EduardoCam
| 2023-10-30T17:32:21Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:EduardoCam/autotrain-data-brisnko",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T17:31:34Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- EduardoCam/autotrain-data-brisnko
co2_eq_emissions:
emissions: 0.4115384416022771
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 97847147059
- CO2 Emissions (in grams): 0.4115
## Validation Metrics
- Loss: 0.580
- Accuracy: 0.811
- Macro F1: 0.810
- Micro F1: 0.811
- Weighted F1: 0.814
- Macro Precision: 0.856
- Micro Precision: 0.811
- Weighted Precision: 0.847
- Macro Recall: 0.817
- Micro Recall: 0.811
- Weighted Recall: 0.811
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/EduardoCam/autotrain-brisnko-97847147059
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("EduardoCam/autotrain-brisnko-97847147059", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("EduardoCam/autotrain-brisnko-97847147059", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Kooten/Nethena-13B-3bpw-h8-exl2
|
Kooten
| 2023-10-30T17:28:23Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T16:02:25Z |
---
license: cc-by-nc-4.0
---
## Description
Exllama 2 quant of [NeverSleep/Nethena-13B](https://huggingface.co/NeverSleep/Nethena-13B)
3 BPW, Head bit set to 8
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## VRAM
My VRAM usage with 13B models is as follows:
| Bits per weight | Context | VRAM |
|--|--|--|
| 8bpw | 8k | 22gb |
| 8bpw | 4k | 19gb |
| 6bpw | 8k | 19gb |
| 6bpw | 4k | 16gb |
| 4bpw | 8k | 16gb |
| 4bpw | 4k | 13gb |
| 3bpw | 8k | 15gb |
| 3bpw | 4k | 12gb |
I have rounded up, so these aren't exact numbers. These were measured on a Windows machine; they should be slightly lower on Linux.
|
Yntec/ChiliConCarne
|
Yntec
| 2023-10-30T17:24:31Z | 616 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-30T10:24:43Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Chili Con Carne
Model specialized in Food Photography.
Samples and prompts:

(Click for larger)
- Top Left: hamburger with melted cheese splashing on top of it, highly stylized, 4k, unreal engine 5 render, food art, food photography, realistic render, smoke, mist, dramatic lighting, cinematic lighting, rule of thirds, depth of field, cinematic bloom, art by
- Top Right: lemon icecream with mapple syrup and chocolate, highly stylized, 4k, unreal engine 5 render, food art, food photography, realistic render, smoke, mist, dramatic lighting, cinematic lighting, rule of thirds, depth of field, cinematic bloom, art by
- Bottom Left: pizza, raining cheese, roast jalapeños with tomato, highly stylized, 4k, unreal engine 5 render, food art, food photography, realistic render, smoke, mist, dramatic lighting, cinematic lighting, rule of thirds, depth of field, cinematic bloom, art by
- Bottom Right: Chili con Carne, classic ground beef, beans, meatballs, highly stylized, 4k, unreal engine 5 render, food art, food photography, realistic render, smoke, mist, dramatic lighting, cinematic lighting, rule of thirds, depth of field, cinematic bloom, art by
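A minimal diffusers sketch for trying the model locally (an illustrative addition; assumes a CUDA GPU, and any of the prompts above will work):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/ChiliConCarne", torch_dtype=torch.float16)
pipe.to("cuda")

prompt = "hamburger with melted cheese splashing on top of it, food photography, dramatic lighting"
image = pipe(prompt).images[0]
image.save("burger.png")
```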
|
MU-NLPC/calcformer-t5-xl
|
MU-NLPC
| 2023-10-30T17:13:14Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:MU-NLPC/Calc-gsm8k",
"dataset:MU-NLPC/Calc-aqua_rat",
"dataset:MU-NLPC/Calc-math_qa",
"dataset:MU-NLPC/Calc-ape210k",
"arxiv:2305.15017",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T15:22:12Z |
---
datasets:
- MU-NLPC/Calc-gsm8k
- MU-NLPC/Calc-aqua_rat
- MU-NLPC/Calc-math_qa
- MU-NLPC/Calc-ape210k
metrics:
- exact_match
- rouge
license: apache-2.0
language:
- en
---
# Model Card for calcformer-t5-xl
This model generates reasoning chains over mathematical questions while **using an external tool: Sympy calculator**.
## Model Description
To offload symbolic computation from the stochastic language model,
we train this model to use a calculator **for all applicable numeric operations**.
This is achieved by training the model to construct calls to the tool's API in this format:
```html
<gadget id="calculator">100/2</gadget> <output>50</output>
```
where the `<gadget>` segment triggers a call to the tool,
which is then served by extending the model's decoder input context with the tool's output inside the `<output>` segment.
- **Developed by:** Calcformer team
- **Model type:** Autoregressive Encoder-Decoder
- **Language(s):** en
- **Finetuned from:** t5-xl
## Sources
- **Repository:** <https://github.com/prompteus/calc-x>
- **Paper:** <https://arxiv.org/abs/2305.15017>
- [**Calcformer model family on HF**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5)
- [**Calc-X dataset collection on HF**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483)
## Usage
In addition to conventional generation, tool-augmented generation requires
(1) an implementation of the tool(s) and
(2) a customized `generate()` method that augments the input context on demand with the tools' outputs.
You can find these two components implemented in the attached **gadgets/model.py** and **gadgets/gadget.py** in this model's repo
and the project's [home repo](https://github.com/prompteus/calc-x).
After adding these two scripts to your directory, you can use the model as follows:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
from gadgets.model import gadget_assisted_model
from gadgets.gadget import Calculator
GadgetAssistedT5 = gadget_assisted_model(T5ForConditionalGeneration)
model_name = "MU-NLPC/calcformer-t5-xl"
model = GadgetAssistedT5.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
model.prepare_for_generate(tokenizer,
                           enabled_gadgets=[Calculator()],
                           default_max_tokens=512)
query = """
The profit from a business transaction is shared among 2 business partners,
Mike and Johnson in the ratio 2:5 respectively.
If Johnson got $2500, how much will Mike have
after spending some of his share on a shirt that costs $200?
"""
inputs = tokenizer(query, return_tensors="pt")
output_ids = model.generate(**inputs)
tokenizer.decode(output_ids[0], spaces_between_special_tokens=False)
```
This returns:
```html
According to the ratio, for every 5 parts that Johnson gets, Mike gets 2 parts Since Johnson got $2500,
each part is therefore $2500/5 = $<gadget id="calculator">2500/5</gadget><output>500</output> 500
Mike will get 2*$500 = $<gadget id="calculator">2*500</gadget><output>1_000</output> 1000
After buying the shirt he will have $1000-$200 = $<gadget id="calculator">1000-200</gadget><output>800</output> 800 left.
Final result is<result>800</result></s>
```
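To pull out just the final answer, you can parse the `<result>` tag from the decoded output (a simple regex sketch, reusing `tokenizer` and `output_ids` from the example above):
```python
import re

decoded = tokenizer.decode(output_ids[0], spaces_between_special_tokens=False)
match = re.search(r"<result>(.*?)</result>", decoded)
print(match.group(1) if match else None)  # -> "800"
```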
## Out-of-Scope Usage
Note that, given the limited complexity of the exercises seen during training, this model will not work well for tasks requiring
more complex algebraic operations, including equations, variables, and operations outside the scope of (+, -, *, /).
## Training
This model was trained on [Calc-X](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483), a collection of math problem datasets which we converted into CoT with calculator interactions.
We used a standard auto-regressive transformer training, i.e. a conditional next-token prediction with cross-entropy loss. For more detail about data, training or evaluation, see the [Calc-X and Calcformers paper](https://arxiv.org/abs/2305.15017).
## Cite
Please cite the [Calcformers paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
|
MU-NLPC/calcformer-flan-xl
|
MU-NLPC
| 2023-10-30T17:13:00Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:MU-NLPC/Calc-gsm8k",
"dataset:MU-NLPC/Calc-aqua_rat",
"dataset:MU-NLPC/Calc-math_qa",
"dataset:MU-NLPC/Calc-ape210k",
"arxiv:2305.15017",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T14:27:00Z |
---
datasets:
- MU-NLPC/Calc-gsm8k
- MU-NLPC/Calc-aqua_rat
- MU-NLPC/Calc-math_qa
- MU-NLPC/Calc-ape210k
metrics:
- exact_match
- rouge
license: apache-2.0
language:
- en
---
# Model Card for calcformer-flan-xl
This model generates reasoning chains over mathematical questions while **using an external tool: Sympy calculator**.
## Model Description
To offload symbolic computation from the stochastic language model,
we train this model to use a calculator **for all applicable numeric operations**.
This is achieved by training the model to construct calls to the tool's API in this format:
```html
<gadget id="calculator">100/2</gadget> <output>50</output>
```
where the `<gadget>` segment triggers a call to the tool,
which is then served by extending the model's decoder input context with the tool's output inside the `<output>` segment.
- **Developed by:** Calcformer team
- **Model type:** Autoregressive Encoder-Decoder
- **Language(s):** en
- **Finetuned from:** google/flan-t5-xl
## Sources
- **Repository:** <https://github.com/prompteus/calc-x>
- **Paper:** <https://arxiv.org/abs/2305.15017>
- [**Calcformer model family on HF**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5)
- [**Calc-X dataset collection on HF**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483)
## Usage
In addition to conventional generation, tool-augmented generation requires
(1) an implementation of the tool(s) and
(2) a customized `generate()` method that augments the input context on demand with the tools' outputs.
You can find these two components implemented in the attached **gadgets/model.py** and **gadgets/gadget.py** in this model's repo
and the project's [home repo](https://github.com/prompteus/calc-x).
After adding these two scripts to your directory, you can use the model as follows:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
from gadgets.model import gadget_assisted_model
from gadgets.gadget import Calculator
GadgetAssistedT5 = gadget_assisted_model(T5ForConditionalGeneration)
model_name = "MU-NLPC/calcformer-flan-xl"
model = GadgetAssistedT5.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
model.prepare_for_generate(tokenizer,
                           enabled_gadgets=[Calculator()],
                           default_max_tokens=512)
query = """
The profit from a business transaction is shared among 2 business partners,
Mike and Johnson in the ratio 2:5 respectively.
If Johnson got $2500, how much will Mike have
after spending some of his share on a shirt that costs $200?
"""
inputs = tokenizer(query, return_tensors="pt")
output_ids = model.generate(**inputs)
tokenizer.decode(output_ids[0], spaces_between_special_tokens=False)
```
This returns:
```html
According to the ratio, for every 5 parts that Johnson gets, Mike gets 2 parts Since Johnson got $2500,
each part is therefore $2500/5 = $<gadget id="calculator">2500/5</gadget><output>500</output> 500
Mike will get 2*$500 = $<gadget id="calculator">2*500</gadget><output>1_000</output> 1000
After buying the shirt he will have $1000-$200 = $<gadget id="calculator">1000-200</gadget><output>800</output> 800 left.
Final result is<result>800</result></s>
```
## Out-of-Scope Usage
Note that, given the limited complexity of the exercises seen during training, this model will not work well for tasks requiring
more complex algebraic operations, including equations, variables, and operations outside the scope of (+, -, *, /).
## Training
This model was trained on [Calc-X](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483), a collection of math problem datasets which we converted into CoT with calculator interactions.
We used a standard auto-regressive transformer training, i.e. a conditional next-token prediction with cross-entropy loss. For more detail about data, training or evaluation, see the [Calc-X and Calcformers paper](https://arxiv.org/abs/2305.15017).
## Cite
Please cite the [Calcformers paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
|
sunyijia97/lora-trained-xl-colab-yuan-v1
|
sunyijia97
| 2023-10-30T16:58:58Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-30T08:28:30Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of yu4nyu4n
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sunyijia97/lora-trained-xl-colab-yuan-v1
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of yu4nyu4n using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
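As a usage sketch (not part of the original card): the adapter can be loaded on top of the base model with diffusers' `load_lora_weights`, swapping in the fp16-fix VAE mentioned above.
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("sunyijia97/lora-trained-xl-colab-yuan-v1")

image = pipe("a photo of yu4nyu4n").images[0]
image.save("yuan.png")
```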
|
auro736/roberta-large-tweet-fid-news-TRC
|
auro736
| 2023-10-30T16:55:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T16:04:36Z |
---
license: mit
language:
- en
pipeline_tag: text-classification
---
|
REDRABBIT0314/SONALITEORCA
|
REDRABBIT0314
| 2023-10-30T16:51:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-30T16:50:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
SkunkworksAI/BakLLaVA_v1_pretrained
|
SkunkworksAI
| 2023-10-30T16:50:28Z | 8 | 8 |
transformers
|
[
"transformers",
"llava_mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-11T04:24:06Z |
Pretrained vision-language projector for BakLLaVA v1 (LLaVA with a Mistral base).
|
auro736/roberta-large-tweet-fid-TRC
|
auro736
| 2023-10-30T16:49:56Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2205.10726",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-23T19:12:59Z |
---
license: mit
language:
- en
pipeline_tag: text-classification
---
## RoBERTa-large-tweet-fid-TRC
This is a [RoBERTa-large](https://huggingface.co/roberta-large) model trained on the [Tweet-FID](https://arxiv.org/abs/2205.10726) dataset (*"TWEET-FID: An Annotated Dataset for Multiple Foodborne Illness Detection Tasks", Ruofan Hu et al., 2022*), a collection of tweets annotated for detecting incidents of foodborne illness.
The model is equipped with a binary classification head to perform the custom task of Text Relevance Classification (TRC).
The objective is to determine whether a given text is related to a food risk (*class_1*) or not (*class_0*).
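A minimal inference sketch (not from the original card) using the `transformers` pipeline; the labels returned follow this checkpoint's `id2label` mapping, which I assume corresponds to the *class_0*/*class_1* scheme above:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="auro736/roberta-large-tweet-fid-TRC")
print(clf("I got food poisoning after eating at that restaurant last night."))
```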
|
aaditya/whisper-small-hi
|
aaditya
| 2023-10-30T16:49:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-26T13:33:09Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.2516
- eval_wer: 100.1206
- eval_runtime: 89.9036
- eval_samples_per_second: 1.112
- eval_steps_per_second: 0.145
- epoch: 0.86
- step: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
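As a usage sketch (an assumption, not part of the original card), the checkpoint can be run through the ASR pipeline; note that the eval WER reported above suggests this very early checkpoint (6 steps) will transcribe poorly.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aaditya/whisper-small-hi")
print(asr("sample_hindi.wav")["text"])  # "sample_hindi.wav" is a placeholder audio path
```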
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
REDRABBIT0314/SONALITE
|
REDRABBIT0314
| 2023-10-30T16:46:41Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-30T08:20:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
ayoub999/LayoutLMv3_5_entities_filtred_12
|
ayoub999
| 2023-10-30T16:34:58Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-30T15:26:57Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: LayoutLMv3_5_entities_filtred_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LayoutLMv3_5_entities_filtred_12
This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- Precision: 0.9474
- Recall: 0.9474
- F1: 0.9474
- Accuracy: 0.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 50.0 | 100 | 0.1150 | 0.9 | 0.9474 | 0.9231 | 0.9784 |
| No log | 100.0 | 200 | 0.1241 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| No log | 150.0 | 300 | 0.1328 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| No log | 200.0 | 400 | 0.1954 | 0.9 | 0.9474 | 0.9231 | 0.9784 |
| 0.0457 | 250.0 | 500 | 0.1845 | 0.8571 | 0.9474 | 0.9 | 0.9712 |
| 0.0457 | 300.0 | 600 | 0.0843 | 1.0 | 0.9474 | 0.9730 | 0.9928 |
| 0.0457 | 350.0 | 700 | 0.0896 | 1.0 | 0.9474 | 0.9730 | 0.9928 |
| 0.0457 | 400.0 | 800 | 0.0947 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0457 | 450.0 | 900 | 0.1026 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0005 | 500.0 | 1000 | 0.1118 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0005 | 550.0 | 1100 | 0.1196 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0005 | 600.0 | 1200 | 0.1257 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0005 | 650.0 | 1300 | 0.1297 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0005 | 700.0 | 1400 | 0.1334 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0002 | 750.0 | 1500 | 0.1360 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0002 | 800.0 | 1600 | 0.1381 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0002 | 850.0 | 1700 | 0.1389 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0002 | 900.0 | 1800 | 0.1396 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0002 | 950.0 | 1900 | 0.1402 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
| 0.0002 | 1000.0 | 2000 | 0.1405 | 0.9474 | 0.9474 | 0.9474 | 0.9856 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
stablediffusionapi/john-smith
|
stablediffusionapi
| 2023-10-30T16:34:29Z | 29 | 1 |
diffusers
|
[
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-30T16:32:10Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# John Smith API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "john-smith".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/john-smith)
Model link: [View model](https://stablediffusionapi.com/models/john-smith)
Credits: [View credits](https://civitai.com/?query=John%20Smith)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "john-smith",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
yufengzheng/monster_toy
|
yufengzheng
| 2023-10-30T16:34:15Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-30T16:26:37Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks monster toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yufengzheng/monster_toy
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks monster toy using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
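A usage sketch (not part of the original card), loading the adapter on top of the SD 2.1 base with diffusers' `load_lora_weights`:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yufengzheng/monster_toy")

image = pipe("a photo of sks monster toy on a beach").images[0]
image.save("monster_toy.png")
```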
|
auro736/deberta-v3-large-tweet-fid-incidents-EMD
|
auro736
| 2023-10-30T16:30:23Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-30T16:27:16Z |
---
license: mit
language:
- en
pipeline_tag: token-classification
---
|
dell-research-harvard/wire-clustering-na
|
dell-research-harvard
| 2023-10-30T16:21:22Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-30T16:21:16Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# dell-research-harvard/wire-clustering-na
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('dell-research-harvard/wire-clustering-na')
embeddings = model.encode(sentences)
print(embeddings)
```
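Since the model ends in a `Normalize()` layer (see the architecture below), embeddings are unit-length, so cosine similarity reduces to a dot product. A small sketch for scoring pairs, e.g. when clustering wire articles:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('dell-research-harvard/wire-clustering-na')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
# cos_sim returns a similarity matrix; the single entry is the pairwise score
print(util.cos_sim(embeddings[0], embeddings[1]))
```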
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=dell-research-harvard/wire-clustering-na)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2311 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`amended_sbert_fns.OnlineContrastiveLoss_wandb`
Parameters of the fit()-Method:
```
{
"epochs": 16,
"evaluation_steps": 112,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36976,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
auro736/deberta-v3-large-tweet-fid-news-TRC
|
auro736
| 2023-10-30T16:15:43Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T16:12:37Z |
---
license: mit
language:
- en
pipeline_tag: text-classification
---
|
tingchih/1030
|
tingchih
| 2023-10-30T16:12:05Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T15:26:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: '1030'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1030
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3768 | 1.0 | 35688 | 1.3910 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
auro736/xlm-roberta-large-tweet-fid-news-TRC
|
auro736
| 2023-10-30T16:11:11Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T16:08:31Z |
---
license: mit
language:
- en
pipeline_tag: text-classification
---
|
rlmjy/biogpt_heart
|
rlmjy
| 2023-10-30T16:07:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-30T16:07:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
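A minimal loading sketch (an assumption — the base checkpoint is not stated in this card; `microsoft/biogpt` below is inferred from the repo name):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")  # assumed base model
model = PeftModel.from_pretrained(base, "rlmjy/biogpt_heart")
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
```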
|
stabilityai/stable-diffusion-xl-base-1.0
|
stabilityai
| 2023-10-30T16:03:47Z | 2,803,847 | 6,272 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"stable-diffusion",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-07-25T13:25:51Z |
---
license: openrail++
tags:
- text-to-image
- stable-diffusion
---
# SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, `accelerate`, as well as the invisible watermark package:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
images = pipe(prompt=prompt).images[0]
```
To use the whole base + refiner pipeline as an ensemble of experts you can run:
```py
from diffusers import DiffusionPipeline
import torch
# load both base & refiner
base = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-refiner-1.0",
text_encoder_2=base.text_encoder_2,
vae=base.vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
refiner.to("cuda")
# Define how many steps to run and what fraction of them to run on each expert (80/20 split)
n_steps = 40
high_noise_frac = 0.8
prompt = "A majestic lion jumping from a big stone at night"
# run both experts
image = base(
prompt=prompt,
num_inference_steps=n_steps,
denoising_end=high_noise_frac,
output_type="latent",
).images
image = refiner(
prompt=prompt,
num_inference_steps=n_steps,
denoising_start=high_noise_frac,
image=image,
).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
### Optimum
[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).
#### OpenVINO
To install Optimum with the dependencies required for OpenVINO:
```bash
pip install optimum[openvino]
```
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.intel import OVStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).
#### ONNX
To install Optimum with the dependencies required for ONNX Runtime inference:
```bash
pip install optimum[onnxruntime]
```
To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.onnxruntime import ORTStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
|
Zedge/sdxl-base
|
Zedge
| 2023-10-30T16:03:47Z | 17 | 0 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"stable-diffusion",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-03-04T09:30:03Z |
---
license: openrail++
tags:
- text-to-image
- stable-diffusion
---
# SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, `accelerate`, as well as the invisible watermark package:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
images = pipe(prompt=prompt).images[0]
```
To use the whole base + refiner pipeline as an ensemble of experts you can run:
```py
from diffusers import DiffusionPipeline
import torch
# load both base & refiner
base = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-refiner-1.0",
text_encoder_2=base.text_encoder_2,
vae=base.vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
refiner.to("cuda")
# Define how many steps to run and what fraction of them to run on each expert (80/20 split)
n_steps = 40
high_noise_frac = 0.8
prompt = "A majestic lion jumping from a big stone at night"
# run both experts
image = base(
prompt=prompt,
num_inference_steps=n_steps,
denoising_end=high_noise_frac,
output_type="latent",
).images
image = refiner(
prompt=prompt,
num_inference_steps=n_steps,
denoising_start=high_noise_frac,
image=image,
).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
### Optimum
[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).
#### OpenVINO
To install Optimum with the dependencies required for OpenVINO:
```bash
pip install optimum[openvino]
```
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.intel import OVStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).
#### ONNX
To install Optimum with the dependencies required for ONNX Runtime inference:
```bash
pip install optimum[onnxruntime]
```
To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.onnxruntime import ORTStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
|
yufengzheng/poop_emoji
|
yufengzheng
| 2023-10-30T16:01:27Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-30T15:53:38Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks poop emoji
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yufengzheng/poop_emoji
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks poop emoji using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
FelipeC/NLP_Course_Part_1
|
FelipeC
| 2023-10-30T15:59:05Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:sst2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-26T19:30:35Z |
---
license: apache-2.0
datasets:
- glue
- sst2
language:
- en
---
# NLP_Course_Part_1
NLP_Course_Part_1 is a transformer model produced as a byproduct of Part 1 of the Hugging Face NLP Course. Although functional, it was created purely for my own learning purposes.
|
yufengzheng/dog8
|
yufengzheng
| 2023-10-30T15:52:58Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-30T15:45:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yufengzheng/dog8
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
yufengzheng/dog3
|
yufengzheng
| 2023-10-30T15:36:42Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-30T15:29:11Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yufengzheng/dog3
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
mathildeparlo/ar_base_model
|
mathildeparlo
| 2023-10-30T15:28:52Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T12:55:27Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ar_base_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ar_base_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4166
- Accuracy: 0.8070
- F1: 0.8142
- Precision: 0.7852
- Recall: 0.8454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
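For readers reproducing this setup, a hedged sketch of the equivalent `transformers.TrainingArguments` (the `output_dir` is illustrative; dataset and model loading are omitted):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is illustrative
training_args = TrainingArguments(
    output_dir="ar_base_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```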
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4275 | 1.0 | 1850 | 0.4166 | 0.8070 | 0.8142 | 0.7852 | 0.8454 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
yufengzheng/dog
|
yufengzheng
| 2023-10-30T15:28:28Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-30T15:21:07Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yufengzheng/dog
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
yufengzheng/dog7
|
yufengzheng
| 2023-10-30T15:20:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-30T15:12:50Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yufengzheng/dog7
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
hdparmar/tradfusion-v2
|
hdparmar
| 2023-10-30T15:12:40Z | 9 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"text-to-image",
"diffusion-models-class",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-27T11:38:27Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- text-to-image
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Fine-tuned Stable Diffusion Model on Irish Traditional Tunes Spectrograms
## Usage
```python
from diffusers import StableDiffusionPipeline

# Load the fine-tuned spectrogram model
pipeline = StableDiffusionPipeline.from_pretrained('hdparmar/tradfusion-v2')

# Stable Diffusion pipelines require a text prompt; this one is illustrative
image = pipeline("an Irish traditional tune spectrogram").images[0]
image
```
|
yufengzheng/dog2
|
yufengzheng
| 2023-10-30T15:03:57Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-30T14:45:02Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yufengzheng/dog2
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
NeverSleep/Nethena-13B-GGUF
|
NeverSleep
| 2023-10-30T15:03:41Z | 17 | 6 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-10-29T18:36:26Z |
---
license: cc-by-nc-4.0
---

# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Nethena-13B model. Use Alpaca format. Suitable for RP, ERP and general stuff.
What would happen if we combined all of our best models? Well... here it is, the holy grail: **Echidna v0.3** + **Athena v3** + **Nete**
This model also has a 20b version, you can check it out right [here](https://huggingface.co/NeverSleep/Nethena-20B).
[Recommended settings - No settings yet (please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains GGUF files of Nethena-13B.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Nethena-13B)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!--[exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Nethena-13B-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. Our DC names are "ikaridev" and "undi".
<!-- description end -->
<!-- description start -->
## Models+loras used and recipe
- NeverSleep/Echidna-13b-v0.3
- IkariDev/Athena-v3
- Undi95/Nete-13B
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
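For local inference, a hedged sketch with `llama-cpp-python` (the GGUF filename is an assumption; substitute whichever quantization you download from this repo):
```python
from llama_cpp import Llama

# Filename is illustrative -- use the actual quant file you downloaded
llm = Llama(model_path="nethena-13b.Q5_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```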
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
doa12/furniture_use_data_finetuning
|
doa12
| 2023-10-30T14:52:37Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-30T04:48:55Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: furniture_use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furniture_use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
brinda9468/videomae-base-finetuned-ucf101-subset
|
brinda9468
| 2023-10-30T14:44:27Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-10-25T14:53:08Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0669
- Accuracy: 0.2222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.5 | 4 | 2.0845 | 0.125 |
| No log | 1.5 | 8 | 2.0850 | 0.125 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mathildeparlo/ben_specific_model
|
mathildeparlo
| 2023-10-30T14:42:42Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:sagorsarker/mbert-bengali-tydiqa-qa",
"base_model:finetune:sagorsarker/mbert-bengali-tydiqa-qa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T14:04:53Z |
---
license: mit
base_model: sagorsarker/mbert-bengali-tydiqa-qa
tags:
- generated_from_trainer
model-index:
- name: ben_specific_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ben_specific_model
This model is a fine-tuned version of [sagorsarker/mbert-bengali-tydiqa-qa](https://huggingface.co/sagorsarker/mbert-bengali-tydiqa-qa) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 299 | 0.3213 | 0.8705 | 0.8807 | 0.8168 | 0.9554 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Henk717/echidna-tiefigther-25
|
Henk717
| 2023-10-30T14:36:49Z | 15 | 8 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T12:36:10Z |
---
license: cc-by-nc-4.0
---
```
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: NeverSleep/Echidna-13b-v0.3
    parameters:
      weight: 1.0
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.25
dtype: float16
```
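To reproduce the merge, a hedged sketch using [mergekit](https://github.com/arcee-ai/mergekit); the exact Python API may differ between mergekit versions, so treat this as an outline rather than a verified recipe:
```python
import yaml
from mergekit.config import MergeConfiguration  # import paths may vary by mergekit version
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML recipe above has been saved to config.yml
with open("config.yml") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Output directory name is illustrative
run_merge(config, "./echidna-tiefighter-25", options=MergeOptions(copy_tokenizer=True))
```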
|
Henk717/echidna-tiefigther-25-gguf
|
Henk717
| 2023-10-30T14:36:12Z | 83 | 3 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-10-30T13:09:43Z |
---
license: cc-by-nc-4.0
---
```
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: NeverSleep/Echidna-13b-v0.3
    parameters:
      weight: 1.0
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.25
dtype: float16
```
|
predictia/convswin2sr_mediterranean
|
predictia
| 2023-10-30T14:31:01Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"conv_swin2sr",
"climate",
"super-resolution",
"image-to-image",
"es",
"en",
"dataset:openclimatefix/era5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-image
| 2023-09-19T07:48:42Z |
---
license: apache-2.0
datasets:
- openclimatefix/era5
language:
- es
- en
metrics:
- mse
library_name: transformers
pipeline_tag: image-to-image
tags:
- climate
- transformers
- super-resolution
---
# Europe Reanalysis Super Resolution
The aim of the project is to create a machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to the one produced by CERRA) by
downscaling global reanalysis data from ERA5.
This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally,
an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained,
a detailed validation framework takes place.
It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics,
disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes.
This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing
the activations of different neurons and the importance of different features in the input data.
This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative. The model **ConvSwin2SR** is released in Apache 2.0, making it usable without
restrictions anywhere.
# Table of Contents
- [Model Card for Europe Reanalysis Super Resolution](#europe-reanalysis-super-resolution)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Technical Specifications](#technical-specifications)
- [Model Architecture](#model-architecture)
- [Components](#components)
- [Configuration details](#configuration-details)
- [Loss function](#loss-function)
- [Computing Infrastructure](#computing-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Authors](#authors)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
We present the ConvSwin2SR transformer, a vision model for downscaling (from 0.25º to 0.05º) reanalysis grids over the Mediterranean area.
- **Developed by:** A team of Predictia Intelligent Data Solutions S.L. & Instituto de Fisica de Cantabria (IFCA)
- **Model type:** Vision model
- **Language(s) (NLP):** en, es
- **License:** Apache-2.0
- **Resources for more information:** More information needed
- [GitHub Repo](https://github.com/ECMWFCode4Earth/DeepR)
# Uses
## Direct Use
The primary use of the ConvSwin2SR transformer is to enhance the spatial resolution of global reanalysis grids over the Mediterranean area, using a regional reanalysis grid
as ground truth. This enhancement is crucial for more precise climate studies, which can aid in better decision-making for various stakeholders including policymakers,
researchers, and weather-dependent industries like agriculture, energy, and transportation.
## Out-of-Scope Use
The model is specifically designed for downscaling ERA5 reanalysis grids to the CERRA regional reanalysis grid and may not perform well or provide accurate results
for other types of geospatial data or geographical regions.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf)
and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes
across protected classes; identity characteristics; and sensitive, social, and occupational groups.
# Training Details
## Training Data
The datasets that are mainly used in the project can be found in the following Copernicus Climate Data Store catalogue entries:
- [ERA5 hourly data on single levels from 1940 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels?tab=overview)
- [CERRA sub-daily regional reanalysis data for Europe on single levels from 1984 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-cerra-single-levels?tab=overview)
1. Input low-resolution grids (ERA5):
The input grids are structured as a 3D array with dimensions of (time, 60, 44), where 60 and 44 are the number of grid points along the longitude and latitude axes,
respectively. Geographically, these grids cover a longitude range from -8.35 to 6.6 and a latitude range from 46.45 to 35.50.
This implies that the data covers a region extending from a westernmost point at longitude -8.35 to an easternmost point at longitude 6.6, and from a
northernmost point at latitude 46.45 to a southernmost point at latitude 35.50.
2. Target high-resolution grids (CERRA):
They are represented as a 3D array with larger dimensions of (time, 240, 160), indicating a finer grid resolution compared to the input grids. Here, 240 and 160 are
the number of grid points along the longitude and latitude axes, respectively. The geographical coverage for these high-resolution grids is defined by a longitude
range from -6.85 to 5.1 and a latitude range from 44.95 to 37. This region extends from a westernmost point at longitude -6.85 to an easternmost point at longitude 5.1,
and from a northernmost point at latitude 44.95 to a southernmost point at latitude 37.

The dataset's temporal division is structured to optimize model training and subsequent per-epoch validation.
The training duration spans 29 years, commencing in January 1985 and culminating in December 2013.
Sequentially, the validation phase begins, covering the period from January 2014 to December 2017. This 4-year interval is solely dedicated to evaluating the model's
aptitude on data it hasn't been exposed to during training. This separation ensures the model's robustness and its capability to make dependable predictions for the
validation period.
## Training Procedure
### Preprocessing
The preprocessing of climate datasets ERA5 and CERRA, extracted from the Climate Data Store (CDS), is a critical step before their utilization in training models.
This section defines the preprocessing steps undertaken to homogenize these datasets into a common format. The steps include unit standardization, coordinate system
rectification, and grid interpolation. The methodology employed in each step is discussed comprehensively in the following paragraphs:
- Unit Standardization: A preliminary step in the preprocessing pipeline involved the standardization of units across both datasets.
This was imperative to ensure a uniform unit system, facilitating a seamless integration of the datasets in later stages.
- Coordinate System Rectification: The coordinate system of the datasets was rectified to ensure a coherent representation of geographical information.
Specifically, the coordinates and dimensions were renamed to a standardized format with longitude (lon) and latitude (lat) as designated names.
The longitude values were adjusted to range from -180 to 180 instead of the initial 0 to 360 range, while latitude values were ordered in ascending order,
thereby aligning with conventional geographical coordinate systems.
- Grid Interpolation: The ERA5 dataset is structured on a regular grid with a spatial resolution of 0.25º, whereas the CERRA dataset inhabits a curvilinear grid with
a Lambert Conformal projection of higher spatial resolution (0.05º). To overcome this disparity in the grid system, a grid interpolation procedure is performed.
This step is crucial to align the datasets onto a common format, a regular grid (with different spatial resolutions), thereby ensuring consistency in spatial
representation. The interpolation transformed the CERRA dataset to match the regular grid structure of the ERA5 dataset, keeping its initial spatial resolution
of 0.05º (5.5 km). A minimal code sketch of these steps is shown below.
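Below is a hedged `xarray` sketch of the rectification and interpolation steps (filenames and variable names are illustrative; true curvilinear regridding usually needs a dedicated tool such as xESMF, so 1-D coordinates are assumed here for brevity):
```python
import numpy as np
import xarray as xr

era5 = xr.open_dataset("era5.nc")    # illustrative filenames
cerra = xr.open_dataset("cerra.nc")

# Coordinate rectification: shift longitudes from [0, 360) to [-180, 180) and sort both axes ascending
era5 = era5.assign_coords(lon=(((era5.lon + 180) % 360) - 180)).sortby("lon").sortby("lat")

# Grid interpolation: resample CERRA onto a regular 0.05-degree grid covering the target domain
target_lon = np.arange(-6.85, 5.1 + 0.05, 0.05)
target_lat = np.arange(37.0, 44.95 + 0.05, 0.05)
cerra_regular = cerra.interp(lon=target_lon, lat=target_lat, method="linear")
```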
### Speeds, Sizes, Times
- Training time: The training duration for the ConvSwin2SR model is notably extensive, clocking in at 3,648 days to complete a total of 100 epochs with
a batch size of 2 (roughly 43,000 batches).
- Model size: The ConvSwin2SR model is a robust machine learning model boasting a total of 12,383,377 parameters.
This size reflects a substantial capacity for learning and generalizing complex relationships within the data, enabling the model to
effectively upscale lower-resolution reanalysis grids to higher-resolution versions.
- Inference speed: The ConvSwin2SR model demonstrates a commendable inference speed, particularly when handling a substantial batch of samples.
Specifically, when tasked with downscaling 248 samples, which is synonymous with processing data for an entire month at 3-hour intervals,
the model completes the operation in a mere 21 seconds. This level of efficiency is observed in a local computing environment outfitted with 16GB of
RAM and 4GB of GPU memory.
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
In terms of spatial dimensions, both the input grids from ERA5 and the target high-resolution grids from CERRA remain consistent throughout the training and testing phases.
This spatial consistency ensures that the model is evaluated under the same geographic conditions as it was trained, allowing for a direct comparison of its performance
across different temporal segments.
The testing data samples correspond to the three-year period from 2018 to 2020, inclusive. This segment is crucial for assessing the model's real-world applicability and
its performance on the most recent data points, ensuring its relevance and reliability in current and future scenarios.
## Results
In our evaluation, the proposed model displayed a significant enhancement over the established baseline, which employs bicubic interpolation for the same task.
Specifically, our model achieved a noteworthy 34.93% reduction in Mean Absolute Error (MAE), a metric indicative of the average magnitude of errors between
predicted and actual values. Furthermore, there was a near 30% improvement in the Root Mean Square Error (RMSE), which measures the square root of the average
of squared differences between predictions and actual values.
These metrics not only underscore the model's capability to predict with greater precision but also emphasize its reduced propensity for errors.
In comparison to the bicubic interpolation baseline, our model's superior predictive accuracy is evident, positioning it as a more reliable tool for this task.
- Mean absolute error (MAE):

- Root mean squared error (RMSE):

# Technical Specifications
## Model Architecture
Our model's design is deeply rooted in the Swin2 architecture, specifically tailored for Super Resolution (SR) tasks.
We've harnessed the [transformers library](https://github.com/huggingface/transformers) to streamline and simplify the model's design.

### Components
- **Transformers Component**: Central to our model is the [transformers.Swin2SRModel](https://huggingface.co/docs/transformers/model_doc/swin2sr#transformers.Swin2SRModel). This component amplifies the spatial resolution of its inputs by a factor of 8. Notably, Swin2SR exclusively supports upscaling ratios that are powers of 2.
- **Convolutional Neural Network (CNN) Component**: Given that our actual upscale ratio is approximately 5 and the designated output shape is (160, 240),
we've integrated a CNN. This serves as a preprocessing unit, transforming inputs into (20, 30) feature maps suitable for the Swin2SRModel.
The underlying objective of this network is to master the residuals stemming from bicubic interpolation; a conceptual sketch of this two-stage design follows below.
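A conceptual PyTorch sketch of the two-stage design (the CNN layers shown are stand-ins, not the actual architecture, which is defined in the config):
```python
import torch
import torch.nn as nn

class ConvSwin2SRSketch(nn.Module):
    """Conceptual two-stage model: a CNN maps (44, 60) ERA5 inputs to (20, 30)
    feature maps, then a Swin2SR-style x8 upsampler produces the (160, 240) grid."""

    def __init__(self, swin2sr_x8: nn.Module, channels: int = 3):
        super().__init__()
        # Stand-in CNN: adapts the input to the (20, 30) shape the Swin2SR stage expects
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.GELU(),
            nn.AdaptiveAvgPool2d((20, 30)),
            nn.Conv2d(64, channels, kernel_size=3, padding=1),
        )
        self.swin2sr_x8 = swin2sr_x8  # any module that upscales by a factor of 8

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)            # (B, C, 20, 30)
        return self.swin2sr_x8(feats)  # (B, C, 160, 240)
```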
### Configuration Details
For those inclined towards the intricacies of the model, the specific parameters governing its behavior are meticulously detailed in the
[config.json](https://huggingface.co/predictia/convswin2sr_mediterranean/blob/main/config.json).
### Loss function
The Swin2 transformer optimizes its parameters using a composite loss function that aggregates multiple L1 loss terms to enhance its predictive
accuracy across different resolutions and representations:
1. **Primary Predictions Loss**:
- This term computes the L1 loss between the primary model predictions and the reference values. It ensures that the transformer's
outputs closely match the ground truth.
2. **Downsampled Predictions Loss**:
- This term calculates the L1 loss between the downsampled versions of the predictions and the reference values. By incorporating this term,
the model is incentivized to preserve the underlying relations between both spatial resolutions. The references and predictions are downsampled
by average pooling with a factor of 5 to match the source resolution. Although this loss term could technically be computed with respect
to the low-resolution sample, the downsampled reference values are used instead, because the average pooling used for downsampling does
not represent the true relationship between the two datasets considered.
3. **Blurred Predictions Loss**:
- To ensure the model's robustness against small perturbations and noise, this term evaluates the L1 loss between blurred versions of the
predictions and the references. This encourages the model to produce predictions that maintain accuracy even under slight modifications
in the data representation. On the other hand, it can smooth the prediction field too much, so it is a term whose use should be studied
before including it in your model. To produce the blurred values, a Gaussian kernel of size 5 is applied.
By combining these loss terms, the ConvSwin2SR is trained to produce realistic predictions.
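A hedged PyTorch sketch of this composite loss (kernel sizes follow the text above; the equal weighting of the three terms is an assumption):
```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def composite_l1_loss(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # 1. Primary predictions loss
    primary = F.l1_loss(pred, ref)

    # 2. Downsampled predictions loss: x5 average pooling back to the source resolution
    down = F.l1_loss(F.avg_pool2d(pred, kernel_size=5), F.avg_pool2d(ref, kernel_size=5))

    # 3. Blurred predictions loss: Gaussian kernel of size 5
    blurred = F.l1_loss(gaussian_blur(pred, kernel_size=5), gaussian_blur(ref, kernel_size=5))

    # Equal weighting is an assumption; the actual weights are not stated in the card
    return primary + down + blurred
```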
## Computing Infrastructure
Leveraging GPUs in deep learning initiatives greatly amplifies the pace of model training and inference. This computational edge not only diminishes the total
computational duration but also equips us to proficiently navigate complex tasks and extensive datasets.
Our profound gratitude extends to our collaborative partners, whose invaluable contribution and support have been cornerstones in the fruition of this project.
Their substantial inputs have immensely propelled our research and developmental strides.
- **AI4EOSC**: Representing "Artificial Intelligence for the European Open Science Cloud," AI4EOSC functions under the aegis of the European Open Science Cloud (EOSC).
Initiated by the European Union, EOSC endeavors to orchestrate a cohesive platform for research data and services. AI4EOSC, a distinct arm within EOSC, concentrates
on embedding and leveraging artificial intelligence (AI) techniques within the open science domain.
- **European Weather Cloud**: Serving as a cloud-centric hub, this platform catalyzes collective efforts in meteorological application design and operations
throughout Europe. Its offerings are manifold, ranging from disseminating weather forecast data to proffering computational prowess, expert counsel, and
consistent support.
### Hardware Specifications
Our endeavor harnesses the capabilities of two virtual machines (VMs), each embedded with a dedicated GPU. One VM is equipped with a 16GB GPU, while its counterpart
carries an even more potent 20GB GPU. This strategic hardware alignment proficiently caters to diverse computational needs, spanning data orchestration to model
fine-tuning and evaluation, ensuring the seamless flow and success of our project.
### Software Resources
For enthusiasts and researchers inclined towards a deeper probe, our model's training and evaluation code is transparently accessible.
Navigate to our GitHub Repository [ECMWFCode4Earth/DeepR](https://github.com/ECMWFCode4Earth/DeepR) under the ECMWF Code for Earth consortium.
### Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
- Mario Santa Cruz. Predictia Intelligent Data Solutions S.L.
- Antonio Pérez. Predictia Intelligent Data Solutions S.L.
- Javier Díez. Instituto de Física de Cantabria (IFCA)
|
lifenghan/TransformerNMT-zh2en
|
lifenghan
| 2023-10-30T14:27:04Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-2.0",
"region:us"
] | null | 2023-10-30T14:15:50Z |
---
license: cc-by-nc-sa-2.0
---
Transformer for NMT trained from scratch for Chinese-to-English. Trained models are hosted here. These models are published in the PhD thesis:
Han, Lifeng (2022) An investigation into multi-word expressions in machine translation. PhD thesis, Dublin City University. https://doras.dcu.ie/26559/
More publication lists on this work are available at:
https://doras.dcu.ie/view/people/Han=3ALifeng=3A=3A.html
Data and trained models (still to be tidied up) are available:
[here](https://drive.google.com/drive/folders/0BygVShQKVPZDfjh3ekw5WTJiclJGME4zWU9BbTNWZl9uank0NmZabk1LWjRrTUN4RFlDaDg?resourcekey=0-zjrjG9p5aSCTcH2-hfWw9g&usp=sharing)
|
jin5605/use_data_finetuning
|
jin5605
| 2023-10-30T14:10:36Z | 219 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-30T08:26:33Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Hafiz47/food_classifier
|
Hafiz47
| 2023-10-30T14:06:38Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-30T13:34:02Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: Hafiz47/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hafiz47/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3692
- Validation Loss: 0.3328
- Train Accuracy: 0.926
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7777 | 1.6234 | 0.834 | 0 |
| 1.1884 | 0.7782 | 0.911 | 1 |
| 0.6717 | 0.5104 | 0.908 | 2 |
| 0.4754 | 0.4022 | 0.914 | 3 |
| 0.3692 | 0.3328 | 0.926 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
titanpark/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab
|
titanpark
| 2023-10-30T14:06:12Z | 2 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:bigcode/starcoder",
"base_model:adapter:bigcode/starcoder",
"region:us"
] | null | 2023-10-30T14:05:58Z |
---
library_name: peft
base_model: bigcode/starcoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
aubmindlab/aragpt2-medium
|
aubmindlab
| 2023-10-30T13:53:45Z | 3,879 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"ar",
"arxiv:2012.15520",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- wikipedia
- Osian
- 1.5B-Arabic-Corpus
- oscar-arabic-unshuffled
- Assafir(private)
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---
# Arabic GPT2
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/>
You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)
The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.
GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository.
These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.
GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`).
Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizers use too much memory, causing the model to not fit even one batch on a TPU core.
AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.
# Usage
## Testing the model using `transformers`:
```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
# pip install arabert
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor
MODEL_NAME='aubmindlab/aragpt2-medium'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)
text=""
text_clean = arabert_prep.preprocess(text)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer)
#feel free to try different decoding settings
generation_pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=10,
max_length=200,
top_p=0.9,
repetition_penalty = 3.0,
no_repeat_ngram_size = 3)[0]['generated_text']
```
## Fine-tuning using `transformers`:
Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)
## Finetuning using our code with TF 1.15.4:
Create the Training TFRecords:
```bash
python create_pretraining_data.py \
 --input_file=<RAW TEXT FILE with documents/articles separated by an empty line> \
 --output_file=<OUTPUT TFRecord> \
 --tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```
Finetuning:
```bash
python3 run_pretraining.py \
 --input_file="gs://<GS_BUCKET>/pretraining_data/*" \
 --output_dir="gs://<GS_BUCKET>/pretraining_model/" \
 --config_file="config/small_hparams.json" \
 --batch_size=128 \
 --eval_batch_size=8 \
 --num_train_steps= \
 --num_warmup_steps= \
 --learning_rate= \
 --save_checkpoints_steps= \
 --max_seq_length=1024 \
 --max_eval_steps= \
 --optimizer="lamb" \
 --iterations_per_loop=5000 \
 --keep_checkpoint_max=10 \
 --use_tpu=True \
 --tpu_name=<TPU NAME> \
 --do_train=True \
 --do_eval=False
```
# Model Sizes
Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Compute
Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 80 | 1M | 15
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9
# Dataset
The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# Disclaimer
The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.
# If you use this model, please cite us as:
```
@inproceedings{antoun-etal-2021-aragpt2,
title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
pages = "196--207",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to [Habib Rahal](https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>
|
aubmindlab/aragpt2-base
|
aubmindlab
| 2023-10-30T13:53:25Z | 9,777 | 25 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"ar",
"arxiv:2012.15520",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- wikipedia
- Osian
- 1.5B-Arabic-Corpus
- oscar-arabic-unshuffled
- Assafir(private)
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---
# Arabic GPT2
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/>
You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)
The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.
GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository.
These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.
GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`).
Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizers use too much memory, causing the model to not fit even one batch on a TPU core.
AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.
# Usage
## Testing the model using `transformers`:
```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
# pip install arabert
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor
MODEL_NAME='aubmindlab/aragpt2-base'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)
text=""
text_clean = arabert_prep.preprocess(text)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer)
#feel free to try different decoding settings
generation_pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=10,
max_length=200,
top_p=0.9,
repetition_penalty = 3.0,
no_repeat_ngram_size = 3)[0]['generated_text']
```
## Fine-tuning using `transformers`:
Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)
## Finetuning using our code with TF 1.15.4:
Create the Training TFRecords:
```bash
python create_pretraining_data.py \
 --input_file=<RAW TEXT FILE with documents/articles separated by an empty line> \
 --output_file=<OUTPUT TFRecord> \
 --tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```
Finetuning:
```bash
python3 run_pretraining.py \
 --input_file="gs://<GS_BUCKET>/pretraining_data/*" \
 --output_dir="gs://<GS_BUCKET>/pretraining_model/" \
 --config_file="config/small_hparams.json" \
 --batch_size=128 \
 --eval_batch_size=8 \
 --num_train_steps= \
 --num_warmup_steps= \
 --learning_rate= \
 --save_checkpoints_steps= \
 --max_seq_length=1024 \
 --max_eval_steps= \
 --optimizer="lamb" \
 --iterations_per_loop=5000 \
 --keep_checkpoint_max=10 \
 --use_tpu=True \
 --tpu_name=<TPU NAME> \
 --do_train=True \
 --do_eval=False
```
# Model Sizes
Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Compute
Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9
# Dataset
The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# Disclaimer
The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.
# If you use this model, please cite us as:
```
@inproceedings{antoun-etal-2021-aragpt2,
title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
pages = "196--207",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to [Habib Rahal](https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>
|
delitante-coder/falcon_tune
|
delitante-coder
| 2023-10-30T13:28:41Z | 4 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null | 2023-10-28T13:35:32Z |
---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
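A minimal sketch of rebuilding this config and loading the adapter for inference; the base-model and adapter ids are taken from the card metadata, and everything else follows the list above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Rebuild the 4-bit config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-instruct-sharded",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # older Falcon checkpoints may ship custom modeling code
)
model = PeftModel.from_pretrained(base, "delitante-coder/falcon_tune")
tokenizer = AutoTokenizer.from_pretrained("vilsonrodrigues/falcon-7b-instruct-sharded")
```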
### Framework versions
- PEFT 0.6.0.dev0
|
johannes-garstenauer/distilbert_masking_heaps
|
johannes-garstenauer
| 2023-10-30T13:27:33Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-15T09:11:59Z |
DistilBERT for masked language modelling, trained on an OpenSSH heap data structures dataset for the purpose of generating representations.
This model was created for the thesis "Generating Robust Representations of Structures in OpenSSH Heap Dumps" by Johannes Garstenauer.
### Model Description
- **Developed by:** Johannes Garstenauer
- **Funded by [optional]:** Universität Passau
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://zenodo.org/records/10053730
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Training data: https://huggingface.co/datasets/johannes-garstenauer/structs_token_size_4_reduced_labelled_train
Validation data: https://huggingface.co/datasets/johannes-garstenauer/structs_token_size_4_reduced_labelled_eval
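A minimal fill-mask sketch using the repository id above; the input string is a hypothetical stand-in for the tokenized heap-dump structures in the linked training dataset.

```python
from transformers import pipeline

# Hypothetical input; real inputs are tokenized heap-dump structures.
fill = pipeline("fill-mask", model="johannes-garstenauer/distilbert_masking_heaps")
print(fill("00000000 [MASK] 00000010 00000018"))
```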
|
johannes-garstenauer/distilbert_class_heaps
|
johannes-garstenauer
| 2023-10-30T13:27:21Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"digital forensics",
"dataset:johannes-garstenauer/structs_token_size_4_reduced_labelled_eval",
"dataset:johannes-garstenauer/structs_token_size_4_reduced_labelled_train",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-18T13:59:04Z |
---
datasets:
- johannes-garstenauer/structs_token_size_4_reduced_labelled_eval
- johannes-garstenauer/structs_token_size_4_reduced_labelled_train
tags:
- digital forensics
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
DistilBERT for sequence classification, trained on an OpenSSH heap data structures dataset for the purpose of generating representations.
This model was created for the thesis "Generating Robust Representations of Structures in OpenSSH Heap Dumps" by Johannes Garstenauer.
It is finetuned from "johannes-garstenauer/distilbert_masking_heaps".
### Model Description
- **Developed by:** Johannes Garstenauer
- **Funded by [optional]:** Universität Passau
- **Finetuned from model [optional]:** johannes-garstenauer/distilbert_masking_heaps
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://zenodo.org/records/10053730
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Training data: https://huggingface.co/datasets/johannes-garstenauer/structs_token_size_4_reduced_labelled_train
Validation data: https://huggingface.co/datasets/johannes-garstenauer/structs_token_size_4_reduced_labelled_eval
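Analogously, a minimal classification sketch; the input is again a hypothetical heap-structure string, and the label names the model returns depend on its config, which this card does not document.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="johannes-garstenauer/distilbert_class_heaps")
# Hypothetical input; real inputs are tokenized heap-dump structures.
print(classifier("00000000 00000008 00000010 00000018"))
```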
|
nitendra1729/bert-base-uncased-disaster_tweetsv1
|
nitendra1729
| 2023-10-30T13:15:38Z | 65 | 1 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T14:37:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-uncased-disaster_tweetsv1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-disaster_tweetsv1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Kaggle disaster tweets dataset.
It achieves 81% accuracy on the evaluation set.
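Since this is a TensorFlow checkpoint, a minimal inference sketch might look as follows; the label mapping (1 = disaster) is an assumption, as the card does not document it.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "nitendra1729/bert-base-uncased-disaster_tweetsv1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Wildfire spreading fast near the highway", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # assumed: 1 = disaster, 0 = not disaster
```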
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 570, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
goodjin/furniture_use_data_finetuning
|
goodjin
| 2023-10-30T13:15:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-30T08:57:46Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: furniture_use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furniture_use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
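A minimal object-detection sketch using the `transformers` pipeline; the image path is a placeholder, and the furniture label set the model predicts is not documented in this card.

```python
from transformers import pipeline

detector = pipeline("object-detection", model="goodjin/furniture_use_data_finetuning")
# "room.jpg" is a placeholder path for a local test image.
for hit in detector("room.jpg"):
    print(hit["label"], round(hit["score"], 3), hit["box"])
```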
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
WillyWilliam/distilbert-emotion-analysis
|
WillyWilliam
| 2023-10-30T13:08:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T13:05:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilbert-emotion-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-emotion-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1449
- Accuracy: 0.935
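A minimal usage sketch; the six labels come from the `emotion` dataset (sadness, joy, love, anger, fear, surprise), assuming the default label mapping was kept.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="WillyWilliam/distilbert-emotion-analysis",
    top_k=None,  # return scores for all labels
)
print(classifier("I can't stop smiling today!"))
```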
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.1609 | 0.935 |
| No log | 2.0 | 250 | 0.1449 | 0.935 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
RogerB/afro-xlmr-large-kinteal-domain-kinte-task-unkin-sent3
|
RogerB
| 2023-10-30T13:06:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/afro-xlmr-large-kinteal-domain-kinte-task",
"base_model:finetune:RogerB/afro-xlmr-large-kinteal-domain-kinte-task",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-30T11:25:18Z |
---
license: mit
base_model: RogerB/afro-xlmr-large-kinteal-domain-kinte-task
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: afro-xlmr-large-kinteal-domain-kinte-task-unkin-sent3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-kinteal-domain-kinte-task-unkin-sent3
This model is a fine-tuned version of [RogerB/afro-xlmr-large-kinteal-domain-kinte-task](https://huggingface.co/RogerB/afro-xlmr-large-kinteal-domain-kinte-task) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9247
- F1: 0.6910
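A minimal sketch with explicit model classes; the example sentence is a hypothetical Kinyarwanda input, and the meaning of the three sentiment labels is not documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "RogerB/afro-xlmr-large-kinteal-domain-kinte-task-unkin-sent3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Hypothetical Kinyarwanda input sentence.
inputs = tokenizer("Umudugudu wacu ni mwiza cyane", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label meanings are not documented in this card
```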
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9089 | 1.0 | 1013 | 0.6264 | 0.7476 |
| 0.7196 | 2.0 | 2026 | 0.5055 | 0.8130 |
| 0.6028 | 3.0 | 3039 | 0.5010 | 0.8326 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
brit2738/llama2-13b-peft-ACL
|
brit2738
| 2023-10-30T13:04:48Z | 2 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2023-10-30T10:36:20Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a reload sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
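Unlike 4-bit setups, this adapter was trained with plain 8-bit loading; a minimal reload sketch under that assumption (note the base model is gated and requires accepting Meta's license):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",  # gated; requires access approval
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "brit2738/llama2-13b-peft-ACL")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
```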
### Framework versions
- PEFT 0.6.0.dev0
|
HerbertAIHug/NLP_Capstone
|
HerbertAIHug
| 2023-10-30T13:03:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:huawei-noah/TinyBERT_General_4L_312D",
"base_model:finetune:huawei-noah/TinyBERT_General_4L_312D",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-26T13:03:45Z |
---
base_model: huawei-noah/TinyBERT_General_4L_312D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NLP_Capstone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP_Capstone
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3176
- Accuracy: 0.8671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5286 | 0.2 | 500 | 0.4169 | 0.8251 |
| 0.4299 | 0.4 | 1000 | 0.4137 | 0.8332 |
| 0.3856 | 0.6 | 1500 | 0.3714 | 0.8512 |
| 0.3692 | 0.8 | 2000 | 0.3176 | 0.8671 |
| 0.3604 | 1.0 | 2500 | 0.3869 | 0.8635 |
| 0.3457 | 1.2 | 3000 | 0.4126 | 0.8631 |
| 0.3291 | 1.41 | 3500 | 0.4272 | 0.8675 |
| 0.3481 | 1.61 | 4000 | 0.3754 | 0.8775 |
| 0.3253 | 1.81 | 4500 | 0.4293 | 0.8649 |
| 0.3306 | 2.01 | 5000 | 0.3807 | 0.8789 |
| 0.2849 | 2.21 | 5500 | 0.4291 | 0.8803 |
| 0.2824 | 2.41 | 6000 | 0.4058 | 0.8797 |
| 0.279 | 2.61 | 6500 | 0.4521 | 0.8761 |
| 0.2944 | 2.81 | 7000 | 0.4986 | 0.8747 |
| 0.3347 | 3.01 | 7500 | 0.4364 | 0.8815 |
| 0.2622 | 3.21 | 8000 | 0.5368 | 0.8703 |
| 0.2494 | 3.41 | 8500 | 0.4795 | 0.8854 |
| 0.2645 | 3.61 | 9000 | 0.4795 | 0.8864 |
| 0.243 | 3.81 | 9500 | 0.4570 | 0.8874 |
| 0.2399 | 4.01 | 10000 | 0.5219 | 0.8795 |
| 0.2103 | 4.22 | 10500 | 0.5325 | 0.8775 |
| 0.2196 | 4.42 | 11000 | 0.5629 | 0.8729 |
| 0.2494 | 4.62 | 11500 | 0.5087 | 0.8826 |
| 0.1968 | 4.82 | 12000 | 0.5332 | 0.8779 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
rznas/sapace-invader
|
rznas
| 2023-10-30T12:59:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-30T12:59:19Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 79.00 +/- 23.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rznas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rznas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rznas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
OhST/cppe5_use_data_finetuning
|
OhST
| 2023-10-30T12:55:48Z | 36 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-30T08:23:45Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: cppe5_use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cppe5_use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
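A minimal inference sketch with explicit pre- and post-processing; the image path and the 0.5 confidence threshold are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "OhST/cppe5_use_data_finetuning"
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)

image = Image.open("scene.jpg")  # placeholder path for a local test image
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Keep detections above an arbitrary 0.5 score threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```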
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Nondzu/Mistral-7B-code-16k-qlora
|
Nondzu
| 2023-10-30T12:45:22Z | 1,529 | 26 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-16T15:19:21Z |
---
license: apache-2.0
---
# Mistral-7B-code-16k-qlora
I'm excited to announce the release of a new model called Mistral-7B-code-16k-qlora. This small and fast model shows a lot of promise for supporting coding or acting as a copilot. I'm currently looking for people to help me test it out!
## Additional Information
This model was trained on 3x RTX 3090 in my homelab, using around 65 kWh at approximately 23 cents per kWh, i.e. roughly $15 of electricity.
## Quantised:
1. https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ
2. https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-AWQ
3. https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GGUF
## Download via qBittorrent:
#### Torrent file: https://github.com/Nondzu/LlamaTor/blob/torrents/torrents/Nondzu_Mistral-7B-code-16k-qlora.torrent
## Dataset:
nickrosh/Evol-Instruct-Code-80k-v1
https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
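A minimal sketch of applying this template for generation; the sampling parameters are arbitrary, and the quantised repos above are the lighter-weight alternative to loading the full checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Nondzu/Mistral-7B-code-16k-qlora"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Fill the Alpaca template shown above with an instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a linked list.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```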
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## eval plus
Human eval plus: https://github.com/evalplus/evalplus
```
Nondzu mistral-7b-code
Base
{'pass@1': 0.3353658536585366}
Base + Extra
{'pass@1': 0.2804878048780488}
```
For comparison, here is the original Mistral model tested on the same machine:
```
Mistral 7b
Base
{'pass@1': 0.2926829268292683}
Base + Extra
{'pass@1': 0.24390243902439024}
```
## Settings:
```
base_model: mistralai/Mistral-7B-Instruct-v0.1
base_model_config: mistralai/Mistral-7B-Instruct-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: nickrosh/Evol-Instruct-Code-80k-v1
type: oasst
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./Mistral-7B-Evol-Instruct-16k-test11
adapter: qlora
lora_model_dir:
# 16384 8192 4096 2048
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: mistral-code
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 8
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps: 20
save_steps:
debug:
# deepspeed:
deepspeed: deepspeed/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```

Check my other projects:
https://github.com/Nondzu/LlamaTor
|