modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
antphb/DS-Chatbox-mbart-large-50
|
antphb
| 2023-06-17T11:03:57Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T07:09:44Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DS-Chatbox-mbart-large-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Chatbox-mbart-large-50
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.977 | 0.19 | 100 | 0.0001 |
| 0.023 | 0.38 | 200 | 0.0002 |
| 0.0005 | 0.57 | 300 | 0.0005 |
| 0.0007 | 0.76 | 400 | 0.0006 |
| 0.0012 | 0.95 | 500 | 0.0014 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
nomad-ai/ppo-LunarLander-v2-1
|
nomad-ai
| 2023-06-17T11:01:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T11:00:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.34 +/- 18.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust to the actual .zip in this repo
checkpoint = load_from_hub(repo_id="nomad-ai/ppo-LunarLander-v2-1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
octipuw/pixelcopter
|
octipuw
| 2023-06-17T10:54:58Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T10:45:08Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 47.30 +/- 54.96
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ditrip/ppo-LunarLander-v2
|
Ditrip
| 2023-06-17T10:09:42Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-03T15:16:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.14 +/- 12.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust to the actual .zip in this repo
checkpoint = load_from_hub(repo_id="Ditrip/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
edu-linguistic/opt-1.3b-edu-sft
|
edu-linguistic
| 2023-06-17T09:28:57Z | 0 | 0 | null |
[
"en",
"dataset:yahma/alpaca-cleaned",
"dataset:Nebulous/gpt4all_pruned",
"region:us"
] | null | 2023-06-15T14:16:11Z |
---
datasets:
- yahma/alpaca-cleaned
- Nebulous/gpt4all_pruned
language:
- en
---
## Inference Example:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "edu-linguistic/opt-1.3b-edu-sft"
model_name = 'facebook/opt-1.3b'
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(model_name)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(model_name)
question = "<|prompter|> Consider the following function: f(x1, x2) = ln(x1). This function is…"
question = tokenizer.encode(question, return_tensors='pt')
generation_kwargs = {
"do_sample": True,
"top_k": 0,
"top_p": 0.9,
"bos_token_id": tokenizer.bos_token_id,
"pad_token_id": tokenizer.pad_token_id,
"eos_token_id": tokenizer.eos_token_id,
"num_return_sequences": 1,
"min_new_tokens": 10,
"max_new_tokens": 512,
}
response = model.generate(input_ids=question, **generation_kwargs)
response = tokenizer.decode(response[0],
skip_special_tokens=False,
clean_up_tokenization_spaces=False
)
print(response)
```
|
coyude/Nous-Hermes-13b-Chinese-GGML
|
coyude
| 2023-06-17T09:28:23Z | 0 | 22 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-11T03:42:04Z |
---
license: apache-2.0
language:
- zh
- en
library_name: transformers
pipeline_tag: text-generation
---
Original model: https://huggingface.co/NousResearch/Nous-Hermes-13b
LoRA: https://huggingface.co/ziqingyang/chinese-alpaca-lora-13b
Nous-Hermes-13b was merged with chinese-alpaca-lora-13b to enhance the model's Chinese capability, ~~although it may exhibit a translation style.~~
Projects used:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
https://github.com/ggerganov/llama.cpp
**q5_k_m or q4_k_m is recommended. All models in this repository are ggmlv3 models.**
Text-generation-webui one-click bundle (guide in Chinese): https://www.bilibili.com/read/cv23495183
|
benol/Roma_Pyatifan
|
benol
| 2023-06-17T09:10:31Z | 0 | 0 | null |
[
"ru",
"en",
"arxiv:1910.09700",
"license:unknown",
"region:us"
] | null | 2023-06-17T08:58:04Z |
---
license: unknown
language:
- ru
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PabloQuant29/ppo-LunarLander-v2
|
PabloQuant29
| 2023-06-17T08:36:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T08:35:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.46 +/- 18.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust to the actual .zip in this repo
checkpoint = load_from_hub(repo_id="PabloQuant29/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mustika/alan2
|
mustika
| 2023-06-17T08:36:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T08:34:12Z |
---
license: creativeml-openrail-m
---
|
TheBloke/robin-65B-v2-GGML
|
TheBloke
| 2023-06-17T08:01:48Z | 0 | 17 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T21:59:56Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 65B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 65B v2](https://huggingface.co/OptimalScale/robin-65b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-65B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-65B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-65b-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
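For the Python libraries listed above, a minimal llama-cpp-python sketch using this prompt template (the filename is taken from the Provided Files table below, and a GGML-era llama-cpp-python release is assumed, since newer releases expect GGUF):
```python
from llama_cpp import Llama

# Filename assumed from the Provided Files table; use whichever quant you downloaded
llm = Llama(model_path="robin-65b.ggmlv3.q4_K_M.bin", n_ctx=2048)

prompt = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
    "###Human: write a story about llamas\n###Assistant:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["###Human:"])
print(output["choices"][0]["text"])
```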
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.45 GB | 29.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.65 GB | 37.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.50 GB | 34.00 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.16 GB | 30.66 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | Original llama.cpp quant method, 4-bit. |
| robin-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.35 GB | 41.85 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.80 GB | 39.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.24 GB | 48.74 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.92 GB | 47.42 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.370 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### q6_K and q8_0 files require expansion from archive
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed; the archive simply stores the .bin file in two parts.
### q6_K
Please download:
* `robin-65b.ggmlv3.q6_K.zip`
* `robin-65b.ggmlv3.q6_K.z01`
### q8_0
Please download:
* `robin-65b.ggmlv3.q8_0.zip`
* `robin-65b.ggmlv3.q8_0.z01`
Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
```
sudo apt update -y && sudo apt install 7zip
7zz x robin-65b.ggmlv3.q6_K.zip
```
Once the `.bin` is extracted, you can delete the `.zip` and `.z01` files.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-65b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 65B v2
No model card provided in source repository.
|
musabg/mt5-xl-tr-summarization
|
musabg
| 2023-06-17T07:25:20Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"tr",
"dataset:musabg/wikipedia-tr-summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T16:24:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- musabg/wikipedia-tr-summarization
metrics:
- rouge
model-index:
- name: mt5-xl-tr-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: musabg/wikipedia-tr-summarization
type: musabg/wikipedia-tr-summarization
split: validation
metrics:
- name: Rouge1
type: rouge
value: 56.4468
language:
- tr
---
# mT5-Xl Turkish Summarization
This model is a fine-tuned version of [google/mt5-xl](https://huggingface.co/google/mt5-xl) on the musabg/wikipedia-tr-summarization dataset.
This model can be used with the Hugging Face summarization pipeline.
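A minimal sketch with the pipeline API (the input text and generation settings are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="musabg/mt5-xl-tr-summarization")

text = "..."  # a long Turkish article or Wikipedia section
print(summarizer(text, max_length=128, truncation=True)[0]["summary_text"])
```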
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Eval results
It achieves the following results on the evaluation set:
- Loss: 0.5676
- Rouge1: 56.4468
- Rouge2: 41.3258
- Rougel: 48.1909
- Rougelsum: 48.4284
- Gen Len: 75.9265
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Irgendsoeine/FaceTheVote
|
Irgendsoeine
| 2023-06-17T07:09:49Z | 0 | 0 | null |
[
"image-classification",
"region:us"
] |
image-classification
| 2023-06-14T17:05:43Z |
---
pipeline_tag: image-classification
---
|
kjiwon1222/my_awesome_eli5_clm-model
|
kjiwon1222
| 2023-06-17T06:54:34Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T06:32:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7506
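A minimal text-generation sketch (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kjiwon1222/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=50)[0]["generated_text"])
```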
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8621 | 1.0 | 1137 | 3.7690 |
| 3.7782 | 2.0 | 2274 | 3.7533 |
| 3.7245 | 3.0 | 3411 | 3.7506 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
aga3134/poca-SoccerTwos
|
aga3134
| 2023-06-17T06:48:55Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-17T06:48:14Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aga3134/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Arindam75/Reinforce-pixelcopter-v1
|
Arindam75
| 2023-06-17T06:22:04Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T06:21:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.90 +/- 13.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
eason0203/Reinforce-pixelcopter
|
eason0203
| 2023-06-17T04:49:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T04:49:12Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 7.10 +/- 8.36
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mehnaazasad/bart-large-finetuned-arxiv-co-ga-old
|
mehnaazasad
| 2023-06-17T03:05:43Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"en",
"dataset:mehnaazasad/arxiv_astro_co_ga",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-02T23:10:50Z |
---
license: mit
datasets:
- mehnaazasad/arxiv_astro_co_ga
language:
- en
---
<span style="color:indianred">Status: As of June 8th, 2023 this model has been archived.</span>
For the most recent version, please visit https://huggingface.co/mehnaazasad/bart-large-finetuned-arxiv-co-ga-latest
|
UofA-LINGO/text-to-triplets-explanation-v3
|
UofA-LINGO
| 2023-06-17T02:41:53Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-17T02:38:58Z |
---
license: mit
---
LoRA weights for [`lmsys/vicuna-7b-delta-v0`](https://huggingface.co/lmsys/vicuna-7b-delta-v0)
Trained on 'taesiri/webnlg-triplets-explanation-v1' for 4 epochs.
Command:
```
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model='./checkpoints/lmsys-vicuna-7B-HF' --data_path 'taesiri/webnlg-triplets-explanation-v1' --num_epochs=4 --cutoff_len=512 --group_by_length --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' --lora_r=8 --micro_batch_size=8 --batch_size=32
```
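For inference, a sketch following the usual PEFT loading pattern (the base-model path below mirrors the one in the training command and stands in for whatever Vicuna-7B weights you have locally):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "./checkpoints/lmsys-vicuna-7B-HF"  # placeholder: your local Vicuna-7B weights
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, "UofA-LINGO/text-to-triplets-explanation-v3")
tokenizer = AutoTokenizer.from_pretrained(base_model)
```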
|
nolanaatama/dmnslyrkmtsnybnmstyllr
|
nolanaatama
| 2023-06-17T02:31:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T02:27:58Z |
---
license: creativeml-openrail-m
---
|
Atnafu/amhric_xlmr-small
|
Atnafu
| 2023-06-17T02:23:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-17T02:17:40Z |
---
license: afl-3.0
tags:
- generated_from_trainer
model-index:
- name: amh_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amh_small
This model is a fine-tuned version of [Davlan/afro-xlmr-small](https://huggingface.co/Davlan/afro-xlmr-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2386
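A minimal fill-mask sketch (XLM-R tokenizers use the `<mask>` token; the input sentence is a placeholder):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Atnafu/amhric_xlmr-small")

masked_text = "..."  # an Amharic sentence containing the <mask> token
print(unmasker(masked_text))
```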
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
breadlicker45/MuseRift
|
breadlicker45
| 2023-06-17T02:11:29Z | 170 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:breadlicker45/musenet-encoders-40k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T01:22:00Z |
---
datasets:
- breadlicker45/musenet-encoders-40k
---
|
DreamerGPT/D7b-5-1
|
DreamerGPT
| 2023-06-17T01:38:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-17T01:20:31Z |
---
license: apache-2.0
---
# D7b-5-1
[https://github.com/DreamerGPT/DreamerGPT](https://github.com/DreamerGPT/DreamerGPT)
|
darshan7/Model_xlnet_results
|
darshan7
| 2023-06-17T01:22:18Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"xlnet",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-14T19:04:11Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: darshan7/Model_xlnet_results
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# darshan7/Model_xlnet_results
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0058
- Validation Loss: 0.0110
- Epoch: 9
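A minimal question-answering sketch (the uploaded weights are TensorFlow, so `framework="tf"` is assumed; the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="darshan7/Model_xlnet_results", framework="tf")
print(qa(question="Who wrote the report?", context="The report was written by the audit team in 2022."))
```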
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 181655, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0392 | 0.0262 | 0 |
| 0.0211 | 0.0185 | 1 |
| 0.0151 | 0.0161 | 2 |
| 0.0110 | 0.0127 | 3 |
| 0.0074 | 0.0110 | 4 |
| 0.0058 | 0.0110 | 5 |
| 0.0058 | 0.0110 | 6 |
| 0.0058 | 0.0110 | 7 |
| 0.0059 | 0.0110 | 8 |
| 0.0058 | 0.0110 | 9 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DreamerGPT/D13b-3-3
|
DreamerGPT
| 2023-06-17T01:21:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-17T00:58:23Z |
---
license: apache-2.0
---
# D13b-3-3
[https://github.com/DreamerGPT/DreamerGPT](https://github.com/DreamerGPT/DreamerGPT)
|
harshseth/distilbert-base-uncased-finetuned-imdb
|
harshseth
| 2023-06-17T01:07:35Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-16T19:14:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2646
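A minimal fill-mask sketch (the example sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="harshseth/distilbert-base-uncased-finetuned-imdb")
print(unmasker("This movie was a complete [MASK]."))
```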
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6157 | 1.0 | 469 | 2.3844 |
| 2.4501 | 2.0 | 938 | 2.2822 |
| 2.377 | 3.0 | 1407 | 2.2549 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
H4nan/dqn-SpaceInvadersNoFrameskip-v4
|
H4nan
| 2023-06-16T23:54:53Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-23T18:30:15Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 537.00 +/- 181.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga H4nan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga H4nan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga H4nan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
sam34738/mBERT
|
sam34738
| 2023-06-16T23:44:39Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T20:24:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9812
- Accuracy: 0.6583
- F1: 0.6948
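A minimal text-classification sketch (labels come from the training config and may surface as LABEL_0/LABEL_1; the input is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sam34738/mBERT")
print(classifier("यह फिल्म बहुत अच्छी थी!"))  # illustrative multilingual input
```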
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.749 | 1.0 | 2100 | 0.7068 | 0.4994 | 0.0131 |
| 0.7707 | 2.0 | 4200 | 0.9812 | 0.6583 | 0.6948 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
TheBloke/robin-33B-v2-GGML
|
TheBloke
| 2023-06-16T23:31:16Z | 0 | 5 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:09:39Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 33B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 33B v2](https://huggingface.co/OptimalScale/robin-33b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-33B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| robin-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-33b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 33B v2
No model card provided in source repository.
|
ghze/Taxi_v3
|
ghze
| 2023-06-16T23:00:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T23:00:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the small pickle-loading helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="ghze/Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ghze/Taxi
|
ghze
| 2023-06-16T22:59:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T22:59:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the small pickle-loading helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="ghze/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
devonho/my_awesome_opus_books_model
|
devonho
| 2023-06-16T22:28:30Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-06T07:28:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
config: en-ja
split: test
args: en-ja
metrics:
- name: Bleu
type: bleu
value: 23.8215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4506
- Bleu: 23.8215
- Gen Len: 4.6055
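A minimal sketch via the text2text-generation pipeline (the `translate English to Japanese:` prefix is an assumption about how the en-ja pairs were formatted during fine-tuning):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="devonho/my_awesome_opus_books_model")
print(translator("translate English to Japanese: The weather is nice today.")[0]["generated_text"])
```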
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-------:|:---------------:|:-------:|:-------:|
| 0.4468 | 1.0 | 500000 | 0.4585 | 23.9023 | 4.705 |
| 0.4397 | 2.0 | 1000000 | 0.4506 | 23.8215 | 4.6055 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio1.67
|
AustinCarthy
| 2023-06-16T22:26:05Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T19:48:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio1.67
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio1.67
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benigh_200K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0226
- Accuracy: 0.9982
- F1: 0.9804
- Precision: 0.9942
- Recall: 0.967
- Roc Auc Score: 0.9834
- Tpr At Fpr 0.01: 0.948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0093 | 1.0 | 25032 | 0.0107 | 0.9976 | 0.9750 | 0.9879 | 0.9624 | 0.9809 | 0.9334 |
| 0.0039 | 2.0 | 50064 | 0.0173 | 0.9973 | 0.9704 | 0.9975 | 0.9448 | 0.9723 | 0.9434 |
| 0.0019 | 3.0 | 75096 | 0.0163 | 0.9979 | 0.9771 | 0.9967 | 0.9582 | 0.9790 | 0.9536 |
| 0.001 | 4.0 | 100128 | 0.0216 | 0.9979 | 0.9773 | 0.9981 | 0.9574 | 0.9787 | 0.9576 |
| 0.0009 | 5.0 | 125160 | 0.0226 | 0.9982 | 0.9804 | 0.9942 | 0.967 | 0.9834 | 0.948 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sam34738/indicbert
|
sam34738
| 2023-06-16T22:03:57Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T21:56:33Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: indicbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indicbert
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9751
- Accuracy: 0.6689
- F1: 0.6899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7041 | 1.0 | 2100 | 0.7416 | 0.6589 | 0.6710 |
| 0.8083 | 2.0 | 4200 | 0.9751 | 0.6689 | 0.6899 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
maren-hugg/xlm-roberta-base-finetuned-panx-en-custom
|
maren-hugg
| 2023-06-16T21:56:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-12T06:49:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-panx-en-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en-custom
This model is a fine-tuned version of [maren-hugg/xlm-roberta-base-finetuned-panx-en](https://huggingface.co/maren-hugg/xlm-roberta-base-finetuned-panx-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1045
- F1: 0.8782
- Precision: 0.8496
- Recall: 0.9088
- Accuracy: 0.9754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.886597454037411e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 0.128 | 0.75 | 24 | 0.1087 | 0.8514 | 0.8299 | 0.8740 | 0.9713 |
| 0.074 | 1.5 | 48 | 0.1006 | 0.8637 | 0.8505 | 0.8773 | 0.9750 |
| 0.0506 | 2.25 | 72 | 0.0987 | 0.8728 | 0.8587 | 0.8872 | 0.9749 |
| 0.0393 | 3.0 | 96 | 0.1045 | 0.8782 | 0.8496 | 0.9088 | 0.9754 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Enterprize1/ppo-LunarLander-v2
|
Enterprize1
| 2023-06-16T21:45:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T21:45:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.78 +/- 66.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
stanford-crfm/music-small-ar-800k
|
stanford-crfm
| 2023-06-16T21:28:12Z | 183 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:01:12Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-small-ar-100k
|
stanford-crfm
| 2023-06-16T21:27:39Z | 184 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-04T23:58:03Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-medium-800k
|
stanford-crfm
| 2023-06-16T21:25:52Z | 572 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:17:20Z |
---
license: apache-2.0
---
This is a Medium (360M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-medium-100k
|
stanford-crfm
| 2023-06-16T21:24:54Z | 176 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:08:04Z |
---
license: apache-2.0
---
This is a Medium (360M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-large-100k
|
stanford-crfm
| 2023-06-16T21:24:11Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:22:37Z |
---
license: apache-2.0
---
This is a Large (780M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
jondurbin/airoboros-65b-gpt4-1.2-peft
|
jondurbin
| 2023-06-16T21:01:26Z | 0 | 0 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:other",
"region:us"
] | null | 2023-06-14T09:11:36Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.2
---
peft weights of https://hugginface.co/jondurbin/airoboros-65b-gpt4-1.2, see that card for details
|
crlandsc/bsrnn-vocals
|
crlandsc
| 2023-06-16T20:25:39Z | 0 | 2 | null |
[
"audio source separation",
"music demixing",
"band-split recurrent neural network",
"bsrnn",
"spectrogram",
"vocals",
"region:us"
] | null | 2023-06-16T20:18:04Z |
---
tags:
- audio source separation
- music demixing
- band-split recurrent neural network
- bsrnn
- spectrogram
- vocals
---
# Model Card for bsrnn-vocals
Vocals model for [Music-Demixing-with-Band-Split-RNN](https://github.com/crlandsc/Music-Demixing-with-Band-Split-RNN).
|
crlandsc/bsrnn-bass
|
crlandsc
| 2023-06-16T20:24:33Z | 0 | 1 | null |
[
"audio source separation",
"music demixing",
"band-split recurrent neural network",
"bsrnn",
"spectrogram",
"bass",
"region:us"
] | null | 2023-06-16T20:16:53Z |
---
tags:
- audio source separation
- music demixing
- band-split recurrent neural network
- bsrnn
- spectrogram
- bass
---
# Model Card for bsrnn-bass
Bass model for [Music-Demixing-with-Band-Split-RNN](https://github.com/crlandsc/Music-Demixing-with-Band-Split-RNN).
|
GEMCorp/q-Taxi-v3
|
GEMCorp
| 2023-06-16T20:19:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T20:08:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="GEMCorp/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sngsfydy/resnet-50-finetuned-eurosat
|
sngsfydy
| 2023-06-16T20:17:05Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-16T19:14:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0706
- Accuracy: 0.5152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6069 | 0.99 | 20 | 1.5839 | 0.3879 |
| 1.5395 | 1.98 | 40 | 1.4860 | 0.5485 |
| 1.4321 | 2.96 | 60 | 1.3500 | 0.5364 |
| 1.3292 | 4.0 | 81 | 1.1826 | 0.5212 |
| 1.233 | 4.99 | 101 | 1.0706 | 0.5152 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sheshenin/shshnnphoto
|
sheshenin
| 2023-06-16T20:14:44Z | 41 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-16T20:09:15Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Shshnnphoto Dreambooth model trained by sheshenin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
TheBloke/robin-13B-v2-GGML
|
TheBloke
| 2023-06-16T20:13:21Z | 0 | 6 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:59:47Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 13B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 13B v2](https://huggingface.co/OptimalScale/robin-13b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-13B-v2-fp16)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatbile with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| robin-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 13B v2
No model card provided in source repository.
|
FALLENSTAR/MitsubishiChariotLoRa
|
FALLENSTAR
| 2023-06-16T20:10:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-09T22:52:05Z |
### Model Description
That LoRa based on Mitsubishi Chariot/Chariot grandis 1997-2003. It's also a test model that poorly configured, so you have to play with the settings...
The best images I was able to get with this LoRa were at these settings:
Steps: 25
Sampler: DPM++ SDE Karras,
CFG scale: 6.5
and with LoRa strength 0.8-1

















|
FALLENSTAR/TurbofansLoRa
|
FALLENSTAR
| 2023-06-16T20:09:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-11T01:32:23Z |
### Model Description
This LoRa is based on Turbofan or Aero Covers, an invention from Japan. Turbofan were created to effectively cool the brake discs. Originally they were used in motorsports, and were made out of aluminum.
Now, thanks to new brake technology, Turbofans are not used for their original purpose. And they are not popular in professional motorsports.
But, to me, they add a futuristic style to car tuning.
The best images I was able to get with this LoRa were at these settings:
Steps: 25
Sampler: DPM++ SDE Karras,
CFG scale: 6.5
and with LoRa strength 0.8-1




|
TheBloke/robin-33B-v2-fp16
|
TheBloke
| 2023-06-16T20:07:31Z | 1,566 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-16T18:09:39Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 33B v2 fp16
These files are pytorch format fp16 model files for [OptimalScale's Robin 33B v2](https://huggingface.co/OptimalScale/robin-33b-v2-delta).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-33B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 33B v2
No model card provided in source repository.
|
magorshunov/layoutlm-document-qa
|
magorshunov
| 2023-06-16T20:05:35Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"layoutlm",
"document-question-answering",
"pdf",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2023-06-16T19:44:49Z |
---
language: en
license: mit
pipeline_tag: document-question-answering
tags:
- layoutlm
- document-question-answering
- pdf
widget:
- text: "What is the invoice number?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
---
# LayoutLM for Visual Question Answering
This is a fine-tuned version of the multi-modal [LayoutLM](https://aka.ms/layoutlm) model for the task of question answering on documents. It has been fine-tuned using both the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) and [DocVQA](https://www.docvqa.org/) datasets.
## Getting started with the model
To run these examples, you must have [PIL](https://pillow.readthedocs.io/en/stable/installation.html), [pytesseract](https://pypi.org/project/pytesseract/), and [PyTorch](https://pytorch.org/get-started/locally/) installed in addition to [transformers](https://huggingface.co/docs/transformers/index).
```python
from transformers import pipeline
nlp = pipeline(
"document-question-answering",
model="impira/layoutlm-document-qa",
)
nlp(
"https://templates.invoicehome.com/invoice-template-us-neat-750px.png",
"What is the invoice number?"
)
# {'score': 0.9943977, 'answer': 'us-001', 'start': 15, 'end': 15}
nlp(
"https://miro.medium.com/max/787/1*iECQRIiOGTmEFLdWkVIH2g.jpeg",
"What is the purchase amount?"
)
# {'score': 0.9912159, 'answer': '$1,000,000,000', 'start': 97, 'end': 97}
nlp(
"https://www.accountingcoach.com/wp-content/uploads/2013/10/income-statement-example@2x.png",
"What are the 2020 net sales?"
)
# {'score': 0.59147286, 'answer': '$ 3,750', 'start': 19, 'end': 20}
```
**NOTE**: This model and pipeline was recently landed in transformers via [PR #18407](https://github.com/huggingface/transformers/pull/18407) and [PR #18414](https://github.com/huggingface/transformers/pull/18414), so you'll need to use a recent version of transformers, for example:
```bash
pip install git+https://github.com/huggingface/transformers.git@2ef774211733f0acf8d3415f9284c49ef219e991
```
## About us
This model was created by the team at [Impira](https://www.impira.com/).
|
TheBloke/robin-7B-v2-GGML
|
TheBloke
| 2023-06-16T20:04:09Z | 0 | 8 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:28:00Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 7B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 7B v2](https://huggingface.co/OptimalScale/robin-7b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatbile with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| robin-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 7B v2
No model card provided in source repository.
|
apopam/Taxi-v3
|
apopam
| 2023-06-16T19:55:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:55:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="apopam/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SSSSSSSSSSSJJJJJJJJJJJJJ/my_awesome_eli5_clm-model
|
SSSSSSSSSSSJJJJJJJJJJJJJ
| 2023-06-16T19:44:19Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T19:13:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8765 | 1.0 | 1120 | 3.7555 |
| 3.7769 | 2.0 | 2240 | 3.7368 |
| 3.7331 | 3.0 | 3360 | 3.7341 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
CodyKilpatrick/Reinforce-Pixelcopter-PLE-v0
|
CodyKilpatrick
| 2023-06-16T19:43:03Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-12T15:12:47Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 98.70 +/- 89.31
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Irgendsoeine/FaceTheVotev2
|
Irgendsoeine
| 2023-06-16T19:40:56Z | 4 | 0 |
transformers
|
[
"transformers",
"mobilenet",
"image-classification",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-16T15:39:15Z |
---
pipeline_tag: image-classification
---
|
Atnafu/amharic_xlmr_large
|
Atnafu
| 2023-06-16T19:32:00Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-16T19:17:00Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: amh_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amh_large
It achieves the following results on the evaluation set:
- Loss: 3.9153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crlandsc/tiny-audio-diffusion-snares
|
crlandsc
| 2023-06-16T19:25:10Z | 3 | 1 | null |
[
"audio",
"diffusion",
"waveform diffusion",
"audio diffusion",
"unet",
"region:us"
] | null | 2023-06-10T15:20:00Z |
---
tags:
- audio
- diffusion
- waveform diffusion
- audio diffusion
- unet
---
# Model Card for tiny-audio-diffusion-snares
Snare drum model for tiny-audio-diffusion. Use with [tiny-audio-diffusion](https://github.com/crlandsc/tiny-audio-diffusion) repo to generate snare drum samples.
|
renatosramiro/ppo-LunarLander-v2
|
renatosramiro
| 2023-06-16T19:13:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:12:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.95 +/- 35.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kenhoffman/distilbert-base-uncased-finetuned-emotion-2
|
kenhoffman
| 2023-06-16T18:59:13Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T16:41:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9259570934810228
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2205
- Accuracy: 0.926
- F1: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8469 | 1.0 | 250 | 0.3100 | 0.91 | 0.9078 |
| 0.2483 | 2.0 | 500 | 0.2205 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
digiplay/majicMIX_realistic_v5preview
|
digiplay
| 2023-06-16T18:49:48Z | 397 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T13:09:24Z |
---
license: other
---
Very famous realistic beauty Model
Model info :
https://civitai.com/models/43331?modelVersionId=79068
Orginal Author's DEMO image :

|
Fred01/ppo-LunarLander-v2
|
Fred01
| 2023-06-16T18:27:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T18:26:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.34 +/- 29.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ugiugi/inisw08-RoBERT-mlm-adamw_torch_bs8
|
ugiugi
| 2023-06-16T18:01:51Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-16T03:23:35Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: inisw08-RoBERT-mlm-adamw_torch_bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inisw08-RoBERT-mlm-adamw_torch_bs8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4931
- Accuracy: 0.3551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/add_BERT_24_mnli
|
gokuls
| 2023-06-16T17:59:45Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T12:10:28Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: add_BERT_24_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3295362082994304
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_24_mnli
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0986
- Accuracy: 0.3295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1032 | 1.0 | 3068 | 1.0994 | 0.3182 |
| 1.0988 | 2.0 | 6136 | 1.0986 | 0.3182 |
| 1.0987 | 3.0 | 9204 | 1.0987 | 0.3274 |
| 1.0988 | 4.0 | 12272 | 1.0986 | 0.3274 |
| 1.0987 | 5.0 | 15340 | 1.0986 | 0.3274 |
| 1.0986 | 6.0 | 18408 | 1.0986 | 0.3182 |
| 1.0986 | 7.0 | 21476 | 1.0986 | 0.3182 |
| 1.0986 | 8.0 | 24544 | 1.0986 | 0.3182 |
| 1.0986 | 9.0 | 27612 | 1.0986 | 0.3274 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Trisert/outputs
|
Trisert
| 2023-06-16T17:43:33Z | 0 | 0 | null |
[
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T17:42:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [togethercomputer/RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
anirban-1009/tomato
|
anirban-1009
| 2023-06-16T17:34:28Z | 0 | 0 |
keras
|
[
"keras",
"en",
"dataset:rotten_tomatoes",
"license:agpl-3.0",
"region:us"
] | null | 2023-06-16T17:33:01Z |
---
license: agpl-3.0
datasets:
- rotten_tomatoes
language:
- en
metrics:
- accuracy
library_name: keras
---
|
Ahatsham/flan-t5-small-imdb-text-classification
|
Ahatsham
| 2023-06-16T17:29:33Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-16T14:54:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: flan-t5-small-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
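A minimal usage sketch (the exact prompt format used during fine-tuning is not documented, so the plain review text below is only illustrative):
```python
from transformers import pipeline

classifier = pipeline("text2text-generation", model="Ahatsham/flan-t5-small-imdb-text-classification")
review = "This movie was a wonderful surprise from start to finish."
# The model generates the predicted sentiment label as text
print(classifier(review, max_new_tokens=8)[0]["generated_text"])
```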
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
LarryAIDraw/chara_GrandBlue_KotegawaNanaka_v1
|
LarryAIDraw
| 2023-06-16T17:24:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:17:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/90866/kotegawa-nanaka-or-grand-blue
|
LarryAIDraw/chara_GrandBlue_KotegawaChisa_v1
|
LarryAIDraw
| 2023-06-16T17:24:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:16:42Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/90910/kotegawa-chisa-or-grand-blue
|
LarryAIDraw/Girls_Frontline-M1887_With_multires_noise_version_
|
LarryAIDraw
| 2023-06-16T17:23:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:15:16Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/89093/girls-frontline-m1887-with-multires-noise-version
|
LarryAIDraw/jingliu100
|
LarryAIDraw
| 2023-06-16T17:23:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:14:17Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/89766/jingliu-or-honkai-star-rail
|
LarryAIDraw/ganbaredouki-chan_douki-chan-11
|
LarryAIDraw
| 2023-06-16T17:20:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:11:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/89745/douki-chan-or-do-your-best-doki-chan
|
sd-dreambooth-library/BaysaLaban123
|
sd-dreambooth-library
| 2023-06-16T17:15:24Z | 24 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-16T17:13:19Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: LabanBaysa1
---
### Labanbaysa11 Dreambooth model trained by LabanAsmar with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-768 base model
You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompt!
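A minimal local inference sketch (a CUDA GPU is assumed; the prompt is only an example):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/BaysaLaban123", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo of LabanBaysa1", num_inference_steps=30).images[0]
image.save("labanbaysa1.png")
```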
Sample pictures of:
LabanBaysa1 (use that on your prompt)

|
michaelfeil/ct2fast-falcon-7b-sft-top1-696
|
michaelfeil
| 2023-06-16T17:08:35Z | 7 | 3 |
transformers
|
[
"transformers",
"ctranslate2",
"int8",
"float16",
"sft",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T20:13:56Z |
---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- ctranslate2
- int8
- float16
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: >-
<|prompter|>Write a story about future of AI
development<|endoftext|><|assistant|>
datasets:
- OpenAssistant/oasst1
library_name: transformers
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory use by 2x-4x with int8 inference in C++ on CPU or GPU.
This is a quantized version of [OpenAssistant/falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696).
```bash
pip install hf-hub-ctranslate2>=2.10.0 ctranslate2>=3.16.0
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-falcon-7b-sft-top1-696"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.10.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-16 using
```
ct2-transformers-converter --model OpenAssistant/falcon-7b-sft-top1-696 --output_dir ~/tmp-ct2fast-falcon-7b-sft-top1-696 --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# Open-Assistant Falcon 7B SFT OASST-TOP1 Model
This model is a fine-tuning of TII's [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) LLM.
It was trained with 11,123 top-1 (high-quality) demonstrations of the OASST data set (exported on June 2, 2023) with a batch size of 128 for 8 epochs with LIMA style dropout (p=0.2) and a context-length of 2048 tokens.
## Model Details
- **Finetuned from:** [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Weights & Biases:** [Training log](https://wandb.ai/open-assistant/public-sft/runs/25apbcld) (Checkpoint: 696 steps)
- **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-7b-sft-top1-696_sampling_noprefix2.json)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
## Sample Code
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "OpenAssistant/falcon-7b-sft-top1-696"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
input_text="<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
sequences = pipeline(
input_text,
max_length=500,
do_sample=True,
return_full_text=False,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Configuration Details
Model:
```
falcon-7b:
dtype: bf16
log_dir: "falcon_log_7b"
learning_rate: 1e-5
model_name: "tiiuae/falcon-7b"
deepspeed_config: configs/zero_config.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
save_strategy: steps
eval_steps: 80
save_steps: 80
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 4
per_device_train_batch_size: 4
per_device_eval_batch_size: 8
num_train_epochs: 8
save_total_limit: 4
residual_dropout: 0.2
residual_dropout_lima: true
```
Dataset:
```
oasst-top1:
# oasst_export: 11123 (100.00%)
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz
val_split: 0.05
top_k: 1
```
Train command:
```
deepspeed trainer_sft.py --configs defaults falcon-7b oasst-top1 --cache_dir <data_cache_dir> --output_dir <output_path> --deepspeed
```
Export command:
```
python export_model.py --dtype bf16 --hf_repo_name OpenAssistant/falcon-7b-sft-top1 --trust_remote_code --auth_token <auth_token> <output_path> --max_shard_size 2GB
```
|
solihun22/atina
|
solihun22
| 2023-06-16T16:57:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T16:53:59Z |
---
license: creativeml-openrail-m
---
|
deepgoyal19/lora_tb
|
deepgoyal19
| 2023-06-16T16:46:16Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-15T20:12:19Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - deepgoyal19/lora_tb
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the None dataset. A minimal loading sketch is shown below.
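This sketch loads the base model and attaches these LoRA weights on top (a CUDA GPU is assumed; the prompt is a placeholder):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("deepgoyal19/lora_tb")  # attach the LoRA weights from this repo
image = pipe("your prompt here", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```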
|
Brendar/MaBePa_STS
|
Brendar
| 2023-06-16T16:02:40Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"code",
"fill-mask",
"es",
"dataset:xnli",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-15T14:27:17Z |
---
datasets:
- xnli
language:
- es
library_name: transformers
pipeline_tag: fill-mask
tags:
- code
---
## Introduction
The goal of this work is to build a model that identifies the semantic similarity between two sentences (Semantic Textual Similarity, "STS"), i.e. that measures how alike two documents are. The model is a siamese neural network, meaning the same network, with identical parameters, is used to process both the premise and the hypothesis.
"The STS task is motivated by the observation that accurately modeling the meaning similarity of sentences is a foundational language understanding problem relevant to numerous applications, including: machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantics, search systems, and dialogue and conversation." (Cer et al., 2017, p. 1).
## Data
The chosen dataset was XNLI in Spanish. It contains the fields 'premise', 'hypothesis' and 'label', where the first two are sentences (text strings) and the third is the semantic relation between them, encoded as 'entailment': 0, 'neutral': 1, 'contradiction': 2.
It is made up of three splits:
- TRAINING, with 392,702 examples;
- TEST, with 5,010 examples;
- VALIDATION, with 2,490 examples.
In addition, a Spanish vocabulary of around 31,000 tokens is used, including the special tokens "[MASK]", "[PAD]", "[EOS]", "[UNK]", "[CLS]", "[SEP]", which occupy the first positions of the vocabulary.
This vocabulary comes from the Hugging Face model whose model_name is "dccuchile/bert-base-spanish-wwm-uncased".
## Method
### Tokenization
First we import AutoTokenizer and load the tokenizer of the model defined above. Besides converting the tokens (words) into their vocabulary IDs, it prepends the ID of the special token "[CLS]" and appends "[SEP]".
We also set the model's maximum length (tokenizer.model_max_length) as a parameter; premises and hypotheses longer than this are truncated, and shorter ones are padded with "[PAD]" up to the desired length.
Note that this tokenizer already provides functions equivalent to itos and stoi.
We then tokenize the dataset with the map function, for both the premise and the hypothesis.
### Batch construction
With tokenization done, we build the batches using the torch DataLoader.
The result is batches of size 32 for the training set and 16 for both the validation and test sets. Their dimensions are batch size x number of elements: for the premise and the hypothesis the number of elements is the tokenization length, while for the label, being a single value, the dimension is batch size x 1.
We also include the attention_mask of the premise and of the hypothesis in the batches.
### Base model
"BERT is a pre-trained transformer network (...). The input of BERT consists of the two sentences separated by a special [SEP] token. (...) and the output is passed to a simple regression function to derive the final label." (Reimers and Gurevych, 2019, p. 2).
On top of this base model we fine-tuned our network, following the diagram in (Reimers and Gurevych, 2019, p. 3):
That is, the premise and the hypothesis are each passed through the same BERT, obtaining a pooler output for each ("u" and "v"). These are concatenated together with the absolute difference |u - v|, and the result is passed through a linear layer that produces 3 outputs, the probabilities associated with each label.
The network was trained on the training split using cross-entropy as the loss function, and the model was then validated. The results are presented in the next section.
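A minimal PyTorch sketch of the siamese head described above (class and variable names are illustrative; the hidden size is taken from the backbone config):
```python
import torch
from torch import nn
from transformers import AutoModel

class SiameseSTS(nn.Module):
    """Siamese classifier: concat(u, v, |u - v|) -> linear -> 3 logits."""
    def __init__(self, model_name="dccuchile/bert-base-spanish-wwm-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)  # shared weights for both sentences
        self.classifier = nn.Linear(3 * self.bert.config.hidden_size, 3)

    def forward(self, premise, hypothesis):
        u = self.bert(**premise).pooler_output
        v = self.bert(**hypothesis).pooler_output
        features = torch.cat([u, v, torch.abs(u - v)], dim=-1)
        return self.classifier(features)  # entailment / neutral / contradiction logits
```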
|
kiamesdavies/dressup_facial_james_dreambooth_lora_6
|
kiamesdavies
| 2023-06-16T15:58:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-16T15:15:13Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of jamesniranye person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - kiamesdavies/dressup_facial_james_dreambooth_lora_6
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of jamesniranye person using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
|
jncraton/flan-alpaca-base-ct2-int8
|
jncraton
| 2023-06-16T15:57:10Z | 4 | 0 |
transformers
|
[
"transformers",
"dataset:tatsu-lab/alpaca",
"arxiv:2306.04757",
"arxiv:2210.11416",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-16T15:47:20Z |
---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
---
## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
📣 Curious to know the performance of 🍮 🦙 **Flan-Alpaca** on large-scale LLM evaluation benchmark, **InstructEval**? Read our paper [https://arxiv.org/pdf/2306.04757.pdf](https://arxiv.org/pdf/2306.04757.pdf). We evaluated more than 10 open-source instruction-tuned LLMs belonging to various LLM families including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Codes and datasets: [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval)
📣 **FLAN-T5** is also useful in text-to-audio generation. Find our work at [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango) if you are interested.
Our [repository](https://github.com/declare-lab/flan-alpaca) contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
We have a [live interactive demo](https://huggingface.co/spaces/joaogante/transformers_streaming) thanks to [Joao Gante](https://huggingface.co/joaogante)!
We are also benchmarking many instruction-tuned models at [declare-lab/flan-eval](https://github.com/declare-lab/flan-eval).
Our pretrained models are fully available on HuggingFace 🤗 :
| Model | Parameters | Instruction Data | Training GPUs |
|----------------------------------------------------------------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|
| [Flan-Alpaca-Base](https://huggingface.co/declare-lab/flan-alpaca-base) | 220M | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 1x A6000 |
| [Flan-Alpaca-Large](https://huggingface.co/declare-lab/flan-alpaca-large) | 770M | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 1x A6000 |
| [Flan-Alpaca-XL](https://huggingface.co/declare-lab/flan-alpaca-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 1x A6000 |
| [Flan-Alpaca-XXL](https://huggingface.co/declare-lab/flan-alpaca-xxl) | 11B | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 4x A6000 (FSDP) |
| [Flan-GPT4All-XL](https://huggingface.co/declare-lab/flan-gpt4all-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [GPT4All](https://github.com/nomic-ai/gpt4all) | 1x A6000 |
| [Flan-ShareGPT-XL](https://huggingface.co/declare-lab/flan-sharegpt-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [ShareGPT](https://github.com/domeccleston/sharegpt)/[Vicuna](https://github.com/lm-sys/FastChat) | 1x A6000 |
| [Flan-Alpaca-GPT4-XL*](https://huggingface.co/declare-lab/flan-alpaca-gpt4-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [GPT4-Alpaca](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) | 1x A6000 |
*recommended for better performance
### Why?
[Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) represents an exciting new direction
to approximate the performance of large language models (LLMs) like ChatGPT cheaply and easily.
Concretely, they leverage an LLM such as GPT-3 to generate instructions as synthetic training data.
The synthetic data which covers more than 50k tasks can then be used to finetune a smaller model.
However, the original implementation is less accessible due to licensing constraints of the
underlying [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) model.
Furthermore, users have noted [potential noise](https://github.com/tloen/alpaca-lora/issues/65) in the synthetic
dataset. Hence, it may be better to explore a fully accessible model that is already trained on high-quality (but
less diverse) instructions such as [Flan-T5](https://arxiv.org/abs/2210.11416).
### Usage
```
from transformers import pipeline
prompt = "Write an email about an alpaca that likes flan"
model = pipeline(model="declare-lab/flan-alpaca-gpt4-xl")
model(prompt, max_length=128, do_sample=True)
# Dear AlpacaFriend,
# My name is Alpaca and I'm 10 years old.
# I'm excited to announce that I'm a big fan of flan!
# We like to eat it as a snack and I believe that it can help with our overall growth.
# I'd love to hear your feedback on this idea.
# Have a great day!
# Best, AL Paca
```
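### Usage with CTranslate2
Since this repository holds the CTranslate2 int8 conversion, a minimal sketch for running it directly with `ctranslate2` might look like the following (it assumes the tokenizer files were copied into this repo during conversion; otherwise load the tokenizer from declare-lab/flan-alpaca-base):
```python
import ctranslate2
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

model_dir = snapshot_download("jncraton/flan-alpaca-base-ct2-int8")
translator = ctranslate2.Translator(model_dir, compute_type="int8")
tokenizer = AutoTokenizer.from_pretrained(model_dir)

prompt = "Write an email about an alpaca that likes flan"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
result = translator.translate_batch([tokens])[0]
output_ids = tokenizer.convert_tokens_to_ids(result.hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```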
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-16T15:48:15Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T13:35:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benigh_200K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0187
- Accuracy: 0.9980
- F1: 0.9791
- Precision: 0.9967
- Recall: 0.9622
- Roc Auc Score: 0.9810
- Tpr At Fpr 0.01: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0082 | 1.0 | 21554 | 0.0150 | 0.9968 | 0.9658 | 0.9964 | 0.937 | 0.9684 | 0.9284 |
| 0.0048 | 2.0 | 43108 | 0.0103 | 0.9979 | 0.9772 | 0.9944 | 0.9606 | 0.9802 | 0.9442 |
| 0.0025 | 3.0 | 64662 | 0.0157 | 0.9980 | 0.9788 | 0.9952 | 0.9628 | 0.9813 | 0.9552 |
| 0.0012 | 4.0 | 86216 | 0.0177 | 0.9979 | 0.9774 | 0.9979 | 0.9578 | 0.9789 | 0.9562 |
| 0.0 | 5.0 | 107770 | 0.0187 | 0.9980 | 0.9791 | 0.9967 | 0.9622 | 0.9810 | 0.96 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
adirasayidina/t5-small-nsbs
|
adirasayidina
| 2023-06-16T15:11:20Z | 71 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-14T08:22:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-nsbs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-nsbs
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 38 | 2.2513 |
| No log | 2.0 | 76 | 2.2731 |
| No log | 3.0 | 114 | 2.4256 |
| No log | 4.0 | 152 | 2.5481 |
| No log | 5.0 | 190 | 2.6124 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jennielees/dqn-SpaceInvadersNoFrameskip-v4
|
jennielees
| 2023-06-16T15:07:10Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T15:06:33Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 554.50 +/- 144.92
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jennielees -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jennielees -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jennielees
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
antoninobrillante/gtl-elephant-ext
|
antoninobrillante
| 2023-06-16T15:03:26Z | 30 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-16T14:51:26Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### gtl-elephant-ext Dreambooth model trained by antoninobrillante with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
minhtoan/roberta-masked-lm-vietnamese-nom
|
minhtoan
| 2023-06-16T15:01:53Z | 105 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-24T14:58:33Z |
---
language:
- vi
pipeline_tag: fill-mask
widget:
- text: '<mask> 仍 𠎬 英 䧺 淑 女'
---
# Pre-trained Masked Language Model for Vietnamese Nôm
A masked language model for Nôm script is a specialized version of a language model designed to understand and generate text in the Chữ Nôm script. Chữ Nôm is a logographic writing system used in Vietnam from the 13th to the early 20th century, primarily before the introduction of the Latin-based Vietnamese script.
Similar to other masked language models, such as GPT-3, the Chữ Nôm masked language model is trained on a large corpus of Chữ Nôm texts. This training data helps the model learn the statistical patterns, contextual relationships, and semantic meanings of characters and words in the Chữ Nôm script.
The model was trained on a number of literary works and poems: Bai ca ran co bac, Buom hoa tan truyen, Chinh phu ngam, Gia huan ca, Ho Xuan Huong, Luc Van Tien, Tale of Kieu 1870, Tale of Kieu 1871, Tale of Kieu 1902,...
# How to use the model
~~~~
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
import torch
# Load the tokenizer
tokenizer = RobertaTokenizerFast.from_pretrained('minhtoan/roberta-masked-lm-vietnamese-nom')
# Load the model
model = RobertaForMaskedLM.from_pretrained('minhtoan/roberta-masked-lm-vietnamese-nom')
text = '<mask>如㗂䳽𠖤戈'
inputs = tokenizer(text, return_tensors="pt")
mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
logits = model(**inputs).logits
mask_token_logits = logits[0, mask_token_index, :]
print("Predicted word:", tokenizer.decode(mask_token_logits[0].argmax()))
~~~~
## Author
Phan Minh Toan
|
aga3134/rl_course_vizdoom_health_gathering_supreme
|
aga3134
| 2023-06-16T15:00:12Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T14:59:39Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.85 +/- 5.07
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r aga3134/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
gokuls/hBERTv1_new_pretrain_48_KD_w_init_mnli
|
gokuls
| 2023-06-16T14:54:08Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T06:00:53Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_KD_w_init_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3295362082994304
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_w_init_mnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0982
- Accuracy: 0.3295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1031 | 1.0 | 3068 | 1.0998 | 0.3274 |
| 1.0989 | 2.0 | 6136 | 1.0987 | 0.3182 |
| 1.0988 | 3.0 | 9204 | 1.0986 | 0.3274 |
| 1.0987 | 4.0 | 12272 | 1.0986 | 0.3182 |
| 1.0987 | 5.0 | 15340 | 1.0986 | 0.3182 |
| 1.0987 | 6.0 | 18408 | 1.0986 | 0.3182 |
| 1.0986 | 7.0 | 21476 | 1.0982 | 0.3274 |
| 1.0986 | 8.0 | 24544 | 1.0986 | 0.3274 |
| 1.0986 | 9.0 | 27612 | 1.0986 | 0.3545 |
| 1.0986 | 10.0 | 30680 | 1.0986 | 0.3545 |
| 1.0987 | 11.0 | 33748 | 1.0987 | 0.3182 |
| 1.0986 | 12.0 | 36816 | 1.0986 | 0.3182 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kejolong/mizuki2.0
|
kejolong
| 2023-06-16T14:39:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T14:37:05Z |
---
license: creativeml-openrail-m
---
|
luischir/bert-base-spanish-wwm-uncased-finetuned-squad
|
luischir
| 2023-06-16T14:30:56Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T22:01:09Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-uncased-finetuned-squad
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2504
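A minimal usage sketch with the question-answering pipeline (the question and context below are only illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="luischir/bert-base-spanish-wwm-uncased-finetuned-squad")
result = qa(
    question="¿Dónde vive el oso de anteojos?",  # "Where does the spectacled bear live?"
    context="El oso de anteojos vive en los bosques andinos de Sudamérica.",
)
print(result["answer"], result["score"])
```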
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 1.4234 |
| No log | 2.0 | 400 | 1.2396 |
| 1.3232 | 3.0 | 600 | 1.2504 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Xuanlong/MUAD_DeepLabmodel
|
Xuanlong
| 2023-06-16T14:12:58Z | 0 | 0 | null |
[
"arxiv:2203.01437",
"license:afl-3.0",
"region:us"
] | null | 2023-06-16T13:11:46Z |
---
license: afl-3.0
---
## DeepLab v3 plus - ResNet101 model trained on MUAD dataset
This is a DeepLab v3 plus model with ResNet101 backbone trained on the MUAD dataset. The training is based on PyTorch.
MUAD is a synthetic dataset with multiple uncertainties for autonomous driving [[Paper]](https://arxiv.org/abs/2203.01437) [[Website]](https://muad-dataset.github.io/) [[Github]](https://github.com/ENSTA-U2IS/MUAD-Dataset).
### ICCV UNCV 2023 | MUAD challenge
MUAD challenge is now on board on the Codalab platform for uncertainty estimation in semantic segmentation. This challenge is hosted in conjunction with the [ICCV 2023](https://iccv2023.thecvf.com/) workshop, [Uncertainty Quantification for Computer Vision (UNCV)](https://uncv2023.github.io/). Go and have a try! 🚀 🚀 🚀 [[Challenge link]](https://codalab.lisn.upsaclay.fr/competitions/8007)
### Reference
If you find this work useful for your research, please consider citing our paper:
```
@inproceedings{franchi22bmvc,
title = {MUAD: Multiple Uncertainties for Autonomous Driving benchmark for multiple uncertainty types and tasks},
author = {Gianni Franchi and Xuanlong Yu and Andrei Bursuc and Angel Tena and Rémi Kazmierczak and Severine Dubuisson and Emanuel Aldea and David Filliat},
booktitle = {33rd British Machine Vision Conference, {BMVC}},
year = {2022}
}
```
```
@inproceedings{deeplabv3plus2018,
title = {Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
author = {Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
booktitle = {ECCV},
year = {2018}
}
```
### Copyright
Copyright for MUAD Dataset is owned by Université Paris-Saclay (SATIE Laboratory, Gif-sur-Yvette, FR) and ENSTA Paris (U2IS Laboratory, Palaiseau, FR).
|
KBLab/bart-base-swedish-cased
|
KBLab
| 2023-06-16T14:08:56Z | 129 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"sv",
"arxiv:1910.13461",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language: sv
widget:
- text: "Jag har ätit en <mask>"
---
## KB-BART
A [BART](https://arxiv.org/abs/1910.13461) model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with [Fairseq](https://github.com/pytorch/fairseq), and converted to be compatible with Huggingface.
Training code can be found [here](https://github.com/kb-labb/kb_bart).
## Usage
```python
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast, AutoTokenizer
model = BartForConditionalGeneration.from_pretrained("KBLab/bart-base-swedish-cased")
tok = AutoTokenizer.from_pretrained("KBLab/bart-base-swedish-cased")
model.eval()
input_ids = tok.encode(
"Jag har ätit en utsökt <mask> på restaurang vid <mask> .", return_tensors="pt"
)
# Simple greedy search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
num_beams=1,
do_sample=False,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang vid havet på restaurang vid havet.</s>'
# Sampling
output_ids = model.generate(
input_ids,
min_length=15,
max_length=20,
num_beams=1,
do_sample=True,
)
tok.decode(output_ids[0])
#'</s><s> Jag har ätit en utsökt god mat som de tagit in på restaurang vid avröjda</s>'
# Beam search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=True,
num_return_sequences=6
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet. Jag har varit ute och gått en sväng.</s><pad><pad>'
# Diverse beam generation
output_ids = model.generate(
input_ids,
min_length=50,
max_length=100,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=False,
num_return_sequences=6,
num_beam_groups=8,
diversity_penalty=2.0,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang. Jag har varit på restaurang i två dagar... Jag..,..!!!.. Så.. Nu.. Hej.. Vi.. Här.</s>'
```
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium ([www.hpc-rivr.si](https://www.hpc-rivr.si/)) and EuroHPC JU ([eurohpc-ju.europa.eu/](https://eurohpc-ju.europa.eu/)) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science ([www.izum.si](https://www.izum.si/)).
|
heack/HeackMT5-ZhCleanText1ML
|
heack
| 2023-06-16T14:05:27Z | 110 | 11 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T09:45:13Z |
---
pipeline_tag: text2text-generation
---
# HeackMT5-ZhCleanText1ML: A Text Cleaning Model for Chinese Texts
This model, `heack/HeackMT5-ZhCleanText1ML`, is a fine-tuned mT5 model for Chinese text cleaning tasks. It is designed to remove gibberish, clean up the text, and retain the original information as much as possible, and it does not process large sections of non-Chinese text (such as English text).
This model mainly addresses the garbled-text (mojibake) problem that has plagued the Chinese internet for many years. With the help of a large Transformer model it can also, while cleaning, lightly condense the text (only rarely, and only when the model is very confident). You can trust this model: it will not make arbitrary changes to your text. Text made up of non-Chinese characters is left untouched.
The model was trained on 1 million lines of data, with the following training results:
| step | epoch | learning_rate | loss | eval_loss |
|--------|------|---------------|-------|-----------|
| 129000 | 3.73 | 1e-05 | 1.714 | 1.706 |
## Model Details
- Model: mT5
- Language: Chinese (multiple languages supported)
## Usage
Here is how you can use this model for text cleaning:
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
tokenizer = T5Tokenizer.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
text = """
大众汽车集团在第五届中国国际进口博览会携旗下大众汽车品牌、奥灶液弊胀演蹂穷蹭齿港呛奸怀甫磁洒暮烂犁投迪品牌和保时捷品牌亮相,共展出5款纯电动车
型。其中,大众汽车役络观示惑觉髓品牌展出了ID.家族最新成员——ID.AERO概念车,将于2023年上市;奥迪展出了两款豪华运动纯电动车奥迪RS e-tro???Mission GT和首款“Roadjet
陆地专机”奥迪Q5e-t��������Ʒ�2022��ף��µϽron。到2022年底,奥迪将在中国D��������市场提供7款新能源车型。保时捷则展出了两款纯电动车,其中保时捷Mission R概念车为亚洲首秀。保时捷将进一步在电气化领域持续发力,大量创新技
术萤恒扔剪秆仁忙殃掉雄停遵冒姑只脸玉匣有望应用于未来的量产车中,包括全新的电池组和冷����������却系统等。“自2015年以来,中国在智能汽车领域已逐渐在世界上领先。在自动驾驶领域,没有其他国家的技术创新和实施速度现在能够超越中国。”大众汽车集d
团执行副总裁刘云峰说,他指出,中德双方的务实合作广泛而深入,其中经贸合作发挥了压舱石作鑳藉寲杞�鍨嬬殑涓绘垬鍦轰箣涓�銆用,特别是在掏傻汽车行业。大众汽车集团有关人士介绍,大众正积极主动地推进转型,创新求变,oYFb而中国是大众汽车向电动化和交智能化
转型的主战场之一。除了代表大众迄柑居昧懦汽车电动化攻势的多款纯电车型和创新技术外,大众汽车还在本届进博<script会通过互动形式展示了旗下软件公司CARIAD的最新软件研发成果。按计划,在中国,大众汽车品牌ID.家族浴屋??????????????聂日票绢缀郁硼魏挖两
裙快温屎棠虐惨遇的产品阵容将拓展至纯电中型轿车细分市场。
"""
inputs = tokenizer("filter:"+text, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=512, num_beams=4, length_penalty=0.8)
filtered_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(filtered_text)
======================
"""
大众汽车集团在第五届中国国际进口博览会携旗下大众汽车品牌、奥迪品牌和保时捷品牌亮相,共展出5款纯电动车
型。其中,大众汽车品牌展出了ID.家族最新成员——ID.AERO概念车,将于2023年上市;奥迪展出了两款豪华运动纯电动车奥迪RS e-tronMission GT和首款“Roadjet
陆地专机”奥迪Q5e-tron。到2022年底,奥迪将在中国市场提供7款新能源车型。保时捷则展出了两款纯电动车,其中保时捷Mission R概念车为亚洲首秀。保时捷将进一步在电气化领域持续发力,大量创新技
术有望应用于未来的量产车中,包括全新的电池组和冷却系统等。“自2015年以来,中国在智能汽车领域已逐渐在世界上领先。在自动驾驶领域,没有其他国家的技术创新和实施速度现在能够超越中国。”大众汽车集
团执行副总裁刘云峰说,他指出,中德双方的务实合作广泛而深入,其中经贸合作发挥了压舱石作用,特别是在汽车行业。大众汽车集团有关人士介绍,大众正积极主动地推进转型,创新求变,而中国是大众汽车向电动化和交智能化
转型的主战场之一。除了代表大众汽车电动化攻势的多款纯电车型和创新技术外,大众汽车还在本届进博会通过互动形式展示了旗下软件公司CARIAD的最新软件研发成果。按计划,在中国,大众汽车品牌ID.家族的产品阵容将拓展至纯电中型轿车细分市场。
"""
```
## For long text (more than 512 tokens)
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
def split_text(text, tokenizer, length):
chunks = []
chunk = ""
for char in text:
chunk = chunk + char
if len(tokenizer.encode(chunk, truncation=False)) >= length:
if char in {'.', '。', ',', ',', '\n'}:
chunks.append(chunk)
chunk = ""
else:
for i in range(1, 21):
if chunk[-i] in {'.', '。', ',', ',', '\n'}:
break
else:
i = 0
if i == 0:
chunks.append(chunk)
chunk = ""
else:
chunks.append(chunk[:-i])
chunk = chunk[-i:]
chunks.append(chunk)
assert "".join(chunks) == text
return chunks
def filter_luanma_text(text, model, tokenizer):
chunks = split_text(text, tokenizer,500)
filter_texts = []
for chunk in chunks:
inputs = tokenizer("filter:" + chunk, return_tensors="pt")
        outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=500, num_beams=4, length_penalty=0.8)
        filter_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
filter_texts.append(filter_text)
return " ".join(filter_texts)
model = MT5ForConditionalGeneration.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
tokenizer = T5Tokenizer.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
filtered_text = filter_luanma_text("需要df过滤的文=本", model, tokenizer)
print(filtered_text)
======================================
"""
需要过滤的文本
"""
```
## Credits
This model is trained and maintained by KongYang from Shanghai Jiao Tong University. For any questions, please reach out to me at my WeChat ID: kongyang.
## License
This model is released under the CC BY-NC-SA 4.0 license.
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{kongyang2023heackmt5ZhCleanText1ML,
title={heack/HeackMT5-ZhCleanText1ML: A Large-Scale Multilingual Abstractive Summarization for Chinese Texts},
author={Kong Yang},
year={2023}
}
```
|
bagassword21/myuta
|
bagassword21
| 2023-06-16T14:03:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T14:02:42Z |
---
license: creativeml-openrail-m
---
|
busywhistling/WizardCoder-15B-V1.0_safetensors
|
busywhistling
| 2023-06-16T14:01:53Z | 12 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T13:32:32Z |
---
license: bigcode-openrail-m
---
|
claraldk01/my_awesome_qa_model
|
claraldk01
| 2023-06-16T14:01:09Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-16T13:52:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 1.9985 |
| 2.6102 | 2.0 | 500 | 1.6297 |
| 2.6102 | 3.0 | 750 | 1.5851 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kejolong/mizuki1.0
|
kejolong
| 2023-06-16T13:55:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T13:53:24Z |
---
license: creativeml-openrail-m
---
|
studio-ousia/mluke-base-lite
|
studio-ousia
| 2023-06-16T13:55:05Z | 145 | 2 |
transformers
|
[
"transformers",
"pytorch",
"luke",
"fill-mask",
"named entity recognition",
"relation classification",
"question answering",
"multilingual",
"ar",
"bn",
"de",
"el",
"en",
"es",
"fi",
"fr",
"hi",
"id",
"it",
"ja",
"ko",
"nl",
"pl",
"pt",
"ru",
"sv",
"sw",
"te",
"th",
"tr",
"vi",
"zh",
"arxiv:2010.01057",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-13T10:42:00Z |
---
language:
- multilingual
- ar
- bn
- de
- el
- en
- es
- fi
- fr
- hi
- id
- it
- ja
- ko
- nl
- pl
- pt
- ru
- sv
- sw
- te
- th
- tr
- vi
- zh
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- relation classification
- question answering
license: apache-2.0
---
## mLUKE
**mLUKE** (multilingual LUKE) is a multilingual extension of LUKE.
Please check the [official repository](https://github.com/studio-ousia/luke) for
more details and updates.
This is the mLUKE base model with 12 hidden layers and a hidden size of 768. The total number of parameters in this model is 279M.
The model was initialized with the weights of XLM-RoBERTa (base) and trained on the December 2020 version of Wikipedia in 24 languages.
This is a lightweight version of [studio-ousia/mluke-base](https://huggingface.co/studio-ousia/mluke-base) without the Wikipedia entity embeddings, keeping only special entities such as `[MASK]`.
## Note
When you load the model from `AutoModel.from_pretrained` with the default configuration, you will see the following warning:
```
Some weights of the model checkpoint at studio-ousia/mluke-base-lite were not used when initializing LukeModel: [
'luke.encoder.layer.0.attention.self.w2e_query.weight', 'luke.encoder.layer.0.attention.self.w2e_query.bias',
'luke.encoder.layer.0.attention.self.e2w_query.weight', 'luke.encoder.layer.0.attention.self.e2w_query.bias',
'luke.encoder.layer.0.attention.self.e2e_query.weight', 'luke.encoder.layer.0.attention.self.e2e_query.bias',
...]
```
These weights are the weights for entity-aware attention (as described in [the LUKE paper](https://arxiv.org/abs/2010.01057)).
This is expected because `use_entity_aware_attention` is set to `false` by default, but the pretrained weights contain the weights for it in case you enable `use_entity_aware_attention` and have the weights loaded into the model.
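For instance, a minimal sketch to opt back into entity-aware attention when loading, so that those weights are actually used:
```python
from transformers import AutoConfig, AutoModel

# Override the config flag before loading the weights
config = AutoConfig.from_pretrained("studio-ousia/mluke-base-lite", use_entity_aware_attention=True)
model = AutoModel.from_pretrained("studio-ousia/mluke-base-lite", config=config)
```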
### Citation
If you find mLUKE useful for your work, please cite the following paper:
```latex
@inproceedings{ri-etal-2022-mluke,
title = "m{LUKE}: {T}he Power of Entity Representations in Multilingual Pretrained Language Models",
author = "Ri, Ryokan and
Yamada, Ikuya and
Tsuruoka, Yoshimasa",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2022",
url = "https://aclanthology.org/2022.acl-long.505",
```
|
vvtq/model_out_4k
|
vvtq
| 2023-06-16T13:32:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-16T08:56:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-vvtq/model_out_4k
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: on a clear dawn/dusk, on the city street, a pedestrian is walking and is obscured

prompt: at daytime, a pedestrian is walking and is obscured

|
TheBloke/airoboros-7B-gpt4-1.2-GGML
|
TheBloke
| 2023-06-16T13:28:10Z | 0 | 7 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:other",
"region:us"
] | null | 2023-06-16T12:24:19Z |
---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Jon Durbin's Airoboros 7B GPT4 1.2 GGML
These files are GGML format model files for [Jon Durbin's Airoboros 7B GPT4 1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GGML)
* [Unquantised fp32 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I produced these 'original' quant method files using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airoboros-7b-gpt4-1.2.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| airoboros-7b-gpt4-1.2.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-7b-gpt4-1.2.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-7b-gpt4-1.2.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| airoboros-7b-gpt4-1.2.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| airoboros-7b-gpt4-1.2.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| airoboros-7b-gpt4-1.2.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| airoboros-7b-gpt4-1.2.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| airoboros-7b-gpt4-1.2.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airoboros-7b-gpt4-1.2.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airoboros-7b-gpt4-1.2.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airoboros-7b-gpt4-1.2.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airoboros-7b-gpt4-1.2.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| airoboros-7b-gpt4-1.2.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m airoboros-7b-gpt4-1.2.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.\nUSER: write a story about llamas\nASSISTANT:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
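If you prefer a Python interface, a minimal sketch with `llama-cpp-python` (assuming a mid-2023 release that still loads GGML v3 files) would be:
```python
from llama_cpp import Llama

# Assumes the q5_0 file has been downloaded to the current directory.
llm = Llama(model_path="airoboros-7b-gpt4-1.2.ggmlv3.q5_0.bin", n_ctx=2048)

prompt = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input.\n"
    "USER: write a story about llamas\nASSISTANT:"
)
out = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```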
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 7B GPT4 1.2
### Overview
This is a qlora fine-tuned 7b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.1), but with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the previous versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
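As a sanity check on that spacing, a small (hypothetical) helper could assemble the prompt like this:
```python
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    # preamble + single space + "USER: " + prompt + single space + "ASSISTANT:"
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("Implement the Snake game in python. PLAINFORMAT"))
```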
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred new role-playing training examples.
- A few thousand ORCA style reasoning/math questions with ELI5 prompts to generate the responses (should not be needed in your prompts to this model however, just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
|
shafin/distilbert-base-uncased-cohl
|
shafin
| 2023-06-16T13:16:07Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T18:25:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-cohl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-cohl
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8197
## Model description
More information needed
## Intended uses & limitations
More information needed
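As a hedged usage sketch (the intended downstream use isn't documented, but the checkpoint is tagged for fill-mask inference):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="shafin/distilbert-base-uncased-cohl")
# distilbert-base-uncased derivatives use the [MASK] token.
print(unmasker("The capital of France is [MASK]."))
```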
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.6714 | 1.0 | 157 | 6.5491 |
| 6.4508 | 2.0 | 314 | 6.3591 |
| 6.3245 | 3.0 | 471 | 6.2702 |
| 6.2262 | 4.0 | 628 | 6.1747 |
| 6.1619 | 5.0 | 785 | 6.1199 |
| 6.1333 | 6.0 | 942 | 6.0925 |
| 6.1038 | 7.0 | 1099 | 6.0610 |
| 6.0825 | 8.0 | 1256 | 6.0783 |
| 6.0712 | 9.0 | 1413 | 6.0782 |
| 6.0594 | 10.0 | 1570 | 6.0546 |
| 6.0407 | 11.0 | 1727 | 6.0402 |
| 6.036 | 12.0 | 1884 | 6.0381 |
| 6.0332 | 13.0 | 2041 | 6.0056 |
| 6.0243 | 14.0 | 2198 | 6.0319 |
| 6.0156 | 15.0 | 2355 | 6.0127 |
| 6.0234 | 16.0 | 2512 | 6.0173 |
| 6.0071 | 17.0 | 2669 | 5.9917 |
| 6.0029 | 18.0 | 2826 | 5.9979 |
| 6.0012 | 19.0 | 2983 | 5.9878 |
| 5.9949 | 20.0 | 3140 | 5.9695 |
| 5.9894 | 21.0 | 3297 | 5.9852 |
| 5.9846 | 22.0 | 3454 | 5.9776 |
| 5.9766 | 23.0 | 3611 | 5.9655 |
| 5.9787 | 24.0 | 3768 | 5.9602 |
| 5.9717 | 25.0 | 3925 | 5.9889 |
| 5.9733 | 26.0 | 4082 | 5.9699 |
| 5.9655 | 27.0 | 4239 | 5.9611 |
| 5.9737 | 28.0 | 4396 | 5.9804 |
| 5.9605 | 29.0 | 4553 | 5.9618 |
| 5.9623 | 30.0 | 4710 | 5.9489 |
| 5.9588 | 31.0 | 4867 | 5.9630 |
| 5.9537 | 32.0 | 5024 | 5.9625 |
| 5.9536 | 33.0 | 5181 | 5.9692 |
| 5.9489 | 34.0 | 5338 | 5.9739 |
| 5.9424 | 35.0 | 5495 | 5.9553 |
| 5.945 | 36.0 | 5652 | 5.9464 |
| 5.9402 | 37.0 | 5809 | 5.9514 |
| 5.9376 | 38.0 | 5966 | 5.9398 |
| 5.9389 | 39.0 | 6123 | 5.9321 |
| 5.9274 | 40.0 | 6280 | 5.9638 |
| 5.9324 | 41.0 | 6437 | 5.9382 |
| 5.9275 | 42.0 | 6594 | 5.9396 |
| 5.9222 | 43.0 | 6751 | 5.9417 |
| 5.9282 | 44.0 | 6908 | 5.9344 |
| 5.9247 | 45.0 | 7065 | 5.9181 |
| 5.9167 | 46.0 | 7222 | 5.9462 |
| 5.9099 | 47.0 | 7379 | 5.9378 |
| 5.9126 | 48.0 | 7536 | 5.9052 |
| 5.9119 | 49.0 | 7693 | 5.9241 |
| 5.9116 | 50.0 | 7850 | 5.8920 |
| 5.9003 | 51.0 | 8007 | 5.9172 |
| 5.8978 | 52.0 | 8164 | 5.9379 |
| 5.8994 | 53.0 | 8321 | 5.9163 |
| 5.8973 | 54.0 | 8478 | 5.9284 |
| 5.8954 | 55.0 | 8635 | 5.9162 |
| 5.8959 | 56.0 | 8792 | 5.8985 |
| 5.8983 | 57.0 | 8949 | 5.9143 |
| 5.8878 | 58.0 | 9106 | 5.9355 |
| 5.8909 | 59.0 | 9263 | 5.9024 |
| 5.885 | 60.0 | 9420 | 5.9066 |
| 5.8861 | 61.0 | 9577 | 5.8989 |
| 5.8779 | 62.0 | 9734 | 5.9037 |
| 5.8849 | 63.0 | 9891 | 5.8944 |
| 5.8819 | 64.0 | 10048 | 5.9009 |
| 5.885 | 65.0 | 10205 | 5.9051 |
| 5.8747 | 66.0 | 10362 | 5.9144 |
| 5.8746 | 67.0 | 10519 | 5.9108 |
| 5.8682 | 68.0 | 10676 | 5.8830 |
| 5.8763 | 69.0 | 10833 | 5.9133 |
| 5.8664 | 70.0 | 10990 | 5.8987 |
| 5.8683 | 71.0 | 11147 | 5.8863 |
| 5.8675 | 72.0 | 11304 | 5.9088 |
| 5.8713 | 73.0 | 11461 | 5.8645 |
| 5.8584 | 74.0 | 11618 | 5.9043 |
| 5.8657 | 75.0 | 11775 | 5.8824 |
| 5.8648 | 76.0 | 11932 | 5.9092 |
| 5.8634 | 77.0 | 12089 | 5.9003 |
| 5.86 | 78.0 | 12246 | 5.8910 |
| 5.8629 | 79.0 | 12403 | 5.8885 |
| 5.8505 | 80.0 | 12560 | 5.8681 |
| 5.8608 | 81.0 | 12717 | 5.8960 |
| 5.8481 | 82.0 | 12874 | 5.9000 |
| 5.8495 | 83.0 | 13031 | 5.8935 |
| 5.8436 | 84.0 | 13188 | 5.8784 |
| 5.8493 | 85.0 | 13345 | 5.8821 |
| 5.8507 | 86.0 | 13502 | 5.8831 |
| 5.8472 | 87.0 | 13659 | 5.8779 |
| 5.8422 | 88.0 | 13816 | 5.8784 |
| 5.8412 | 89.0 | 13973 | 5.8630 |
| 5.8416 | 90.0 | 14130 | 5.8723 |
| 5.842 | 91.0 | 14287 | 5.8794 |
| 5.8375 | 92.0 | 14444 | 5.8611 |
| 5.8404 | 93.0 | 14601 | 5.8705 |
| 5.8451 | 94.0 | 14758 | 5.8883 |
| 5.8364 | 95.0 | 14915 | 5.8747 |
| 5.8365 | 96.0 | 15072 | 5.8885 |
| 5.8277 | 97.0 | 15229 | 5.8667 |
| 5.8255 | 98.0 | 15386 | 5.8603 |
| 5.8336 | 99.0 | 15543 | 5.8644 |
| 5.826 | 100.0 | 15700 | 5.8725 |
| 5.8223 | 101.0 | 15857 | 5.8714 |
| 5.8415 | 102.0 | 16014 | 5.8773 |
| 5.8286 | 103.0 | 16171 | 5.8704 |
| 5.8281 | 104.0 | 16328 | 5.8732 |
| 5.8246 | 105.0 | 16485 | 5.8582 |
| 5.8267 | 106.0 | 16642 | 5.8603 |
| 5.8176 | 107.0 | 16799 | 5.8751 |
| 5.8214 | 108.0 | 16956 | 5.8774 |
| 5.8115 | 109.0 | 17113 | 5.8826 |
| 5.8205 | 110.0 | 17270 | 5.8516 |
| 5.8136 | 111.0 | 17427 | 5.8743 |
| 5.8166 | 112.0 | 17584 | 5.8555 |
| 5.8171 | 113.0 | 17741 | 5.8695 |
| 5.8176 | 114.0 | 17898 | 5.8531 |
| 5.8108 | 115.0 | 18055 | 5.8570 |
| 5.808 | 116.0 | 18212 | 5.8552 |
| 5.8094 | 117.0 | 18369 | 5.8619 |
| 5.8108 | 118.0 | 18526 | 5.8665 |
| 5.8064 | 119.0 | 18683 | 5.8851 |
| 5.8099 | 120.0 | 18840 | 5.8507 |
| 5.8073 | 121.0 | 18997 | 5.8676 |
| 5.814 | 122.0 | 19154 | 5.8492 |
| 5.8093 | 123.0 | 19311 | 5.8506 |
| 5.8135 | 124.0 | 19468 | 5.8668 |
| 5.8031 | 125.0 | 19625 | 5.8617 |
| 5.801 | 126.0 | 19782 | 5.8626 |
| 5.8019 | 127.0 | 19939 | 5.8472 |
| 5.8106 | 128.0 | 20096 | 5.8429 |
| 5.8013 | 129.0 | 20253 | 5.8668 |
| 5.809 | 130.0 | 20410 | 5.8824 |
| 5.8 | 131.0 | 20567 | 5.8498 |
| 5.8006 | 132.0 | 20724 | 5.8757 |
| 5.8008 | 133.0 | 20881 | 5.8397 |
| 5.7908 | 134.0 | 21038 | 5.8569 |
| 5.7967 | 135.0 | 21195 | 5.8304 |
| 5.7908 | 136.0 | 21352 | 5.8265 |
| 5.7931 | 137.0 | 21509 | 5.8416 |
| 5.7896 | 138.0 | 21666 | 5.8368 |
| 5.7904 | 139.0 | 21823 | 5.8608 |
| 5.791 | 140.0 | 21980 | 5.8369 |
| 5.7887 | 141.0 | 22137 | 5.8705 |
| 5.7817 | 142.0 | 22294 | 5.8713 |
| 5.787 | 143.0 | 22451 | 5.8488 |
| 5.7913 | 144.0 | 22608 | 5.8516 |
| 5.7877 | 145.0 | 22765 | 5.8438 |
| 5.7905 | 146.0 | 22922 | 5.8595 |
| 5.7901 | 147.0 | 23079 | 5.8488 |
| 5.7906 | 148.0 | 23236 | 5.8460 |
| 5.7806 | 149.0 | 23393 | 5.8294 |
| 5.7912 | 150.0 | 23550 | 5.8776 |
| 5.7803 | 151.0 | 23707 | 5.8262 |
| 5.7821 | 152.0 | 23864 | 5.8729 |
| 5.7889 | 153.0 | 24021 | 5.8541 |
| 5.783 | 154.0 | 24178 | 5.8542 |
| 5.7901 | 155.0 | 24335 | 5.8449 |
| 5.7821 | 156.0 | 24492 | 5.8524 |
| 5.7868 | 157.0 | 24649 | 5.8675 |
| 5.7812 | 158.0 | 24806 | 5.8742 |
| 5.7821 | 159.0 | 24963 | 5.8496 |
| 5.7851 | 160.0 | 25120 | 5.8463 |
| 5.7787 | 161.0 | 25277 | 5.8573 |
| 5.7836 | 162.0 | 25434 | 5.8212 |
| 5.7786 | 163.0 | 25591 | 5.8683 |
| 5.7901 | 164.0 | 25748 | 5.8445 |
| 5.7764 | 165.0 | 25905 | 5.8253 |
| 5.7793 | 166.0 | 26062 | 5.8443 |
| 5.7709 | 167.0 | 26219 | 5.8254 |
| 5.7823 | 168.0 | 26376 | 5.8591 |
| 5.7753 | 169.0 | 26533 | 5.8154 |
| 5.7778 | 170.0 | 26690 | 5.8338 |
| 5.7785 | 171.0 | 26847 | 5.8596 |
| 5.7658 | 172.0 | 27004 | 5.8644 |
| 5.7719 | 173.0 | 27161 | 5.8282 |
| 5.781 | 174.0 | 27318 | 5.8451 |
| 5.7806 | 175.0 | 27475 | 5.8407 |
| 5.7798 | 176.0 | 27632 | 5.8622 |
| 5.7772 | 177.0 | 27789 | 5.8445 |
| 5.7686 | 178.0 | 27946 | 5.8529 |
| 5.7738 | 179.0 | 28103 | 5.8474 |
| 5.776 | 180.0 | 28260 | 5.8565 |
| 5.7685 | 181.0 | 28417 | 5.8253 |
| 5.7659 | 182.0 | 28574 | 5.8449 |
| 5.7684 | 183.0 | 28731 | 5.8497 |
| 5.7709 | 184.0 | 28888 | 5.8385 |
| 5.7631 | 185.0 | 29045 | 5.8131 |
| 5.7733 | 186.0 | 29202 | 5.8428 |
| 5.7736 | 187.0 | 29359 | 5.8388 |
| 5.7704 | 188.0 | 29516 | 5.8519 |
| 5.7719 | 189.0 | 29673 | 5.8454 |
| 5.7737 | 190.0 | 29830 | 5.8209 |
| 5.7667 | 191.0 | 29987 | 5.8681 |
| 5.7686 | 192.0 | 30144 | 5.8417 |
| 5.7754 | 193.0 | 30301 | 5.8566 |
| 5.7743 | 194.0 | 30458 | 5.8510 |
| 5.7739 | 195.0 | 30615 | 5.8308 |
| 5.7755 | 196.0 | 30772 | 5.8390 |
| 5.7702 | 197.0 | 30929 | 5.8320 |
| 5.767 | 198.0 | 31086 | 5.8447 |
| 5.7691 | 199.0 | 31243 | 5.8465 |
| 5.7753 | 200.0 | 31400 | 5.8197 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sofia-todeschini/BioBERT-Large-LitCovid-v1.0
|
sofia-todeschini
| 2023-06-16T12:42:56Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T11:00:34Z |
---
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: BioBERT-Large-LitCovid-v1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioBERT-Large-LitCovid-v1.0
This model is a fine-tuned version of [dmis-lab/biobert-large-cased-v1.1](https://huggingface.co/dmis-lab/biobert-large-cased-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1058
- F1: 0.8993
- Roc Auc: 0.9361
- Accuracy: 0.7969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
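A hedged sketch of how these settings might be expressed with `transformers.TrainingArguments` (the actual training script is not provided; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="BioBERT-Large-LitCovid-v1.0",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,  # Adam betas/epsilon are the library defaults listed above
)
```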
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1108 | 1.0 | 6240 | 0.1058 | 0.8993 | 0.9361 | 0.7969 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|