modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Christiansg/finetuning-sentiment-amazon-group23
|
Christiansg
| 2023-06-17T22:41:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T21:25:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-amazon-group23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-amazon-group23
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5877
- Accuracy: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
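The card gives no usage snippet; a hedged sketch of the standard text-classification pipeline (not from the original card — the import is deferred so the function can be defined without `transformers` installed, and running it downloads the checkpoint):

```python
def classify(texts, model_id="Christiansg/finetuning-sentiment-amazon-group23"):
    # Deferred import: transformers is only needed when the function runs.
    from transformers import pipeline
    classifier = pipeline("text-classification", model=model_id)
    return classifier(texts)  # list of {"label": ..., "score": ...} dicts
```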
|
jvilaseca/Reinforce-Cartpole2
|
jvilaseca
| 2023-06-17T22:21:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T22:17:34Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
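The `mean_reward` reported above (`500.00 +/- 0.00`) is conventionally the mean and standard deviation of returns over a batch of evaluation episodes; a pure-Python sketch of that computation (the episode returns below are illustrative, not the author's actual evaluation):

```python
import statistics

def summarize_returns(episode_returns):
    # mean ± population std, matching the "mean +/- std" metric format
    return statistics.mean(episode_returns), statistics.pstdev(episode_returns)

# CartPole-v1 caps an episode's return at 500, so a fully solved
# agent reports 500.00 +/- 0.00.
mean, std = summarize_returns([500.0] * 10)
print(f"{mean:.2f} +/- {std:.2f}")  # 500.00 +/- 0.00
```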
|
Tyrranen/ppo-LunarLander-v2
|
Tyrranen
| 2023-06-17T19:07:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T19:06:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.55 +/- 19.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
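Pending the author's own snippet, a hedged sketch of the usual `huggingface_sb3` loading pattern (the checkpoint filename is an assumption, not confirmed by the card; imports are deferred so the function can be defined without the libraries installed):

```python
def load_ppo_from_hub(repo_id="Tyrranen/ppo-LunarLander-v2",
                      filename="ppo-LunarLander-v2.zip"):
    # Deferred imports: both libraries are needed only to actually run this;
    # the filename above is hypothetical.
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import PPO
    checkpoint = load_from_hub(repo_id=repo_id, filename=filename)
    return PPO.load(checkpoint)
```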
|
CreatorPhan/ViQA-small
|
CreatorPhan
| 2023-06-17T18:40:13Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"vi",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-17T16:58:15Z |
---
language:
- vi
pipeline_tag: text2text-generation
# inference:
# parameters:
# function_to_apply: "none"
widget:
- text: >-
Trả lời câu hỏi: Công dụng của paracetamol?
Trong nội dung:
PARACETAMOL DẠNG UỐNG – HƯỚNG DẪN SỬ DỤNG AN TOÀN, HỢP LÝ
Trong tình hình diễn biến phức tạp của dịch COVID-19, các thuốc giảm đau hạ sốt thông dụng như Paracetamol được người dân mua về dự trữ trong hộp thuốc gia đình với mục đích phòng dịch. Tuy nhiên, việc sử dụng thuốc hợp lý và đúng cách đôi khi chưa được chú ý, vì vậy việc hiểu và sử dụng thuốc Paracetamol an toàn là rất cần thiết.
I. Tổng quan thuốc Paracetamol
- Paracetamol dạng uống là thuốc thuộc nhóm giảm đau, hạ sốt và nằm trong danh mục thuốc không kê đơn của Bộ Y tế. Chính vì vậy Paracetamol rất phổ biến trên thị trường với nhiều chế phẩm có dạng bào chế và hàm lượng từ thấp đến cao.
- Tác dụng chính của Paracetamol là giảm đau, hạ sốt nên thuốc được sử dụng rộng rãi trong điều trị các chứng đau và sốt từ nhẹ đến vừa như: cảm cúm, nhức đầu, đau bụng, đau nhức…
- Thuốc không nên sử dụng cho những người dị ứng với Paracetamol, người suy gan nặng.
II. Nguy cơ khi sử dụng Paracetamol
- Việc Paracetamol được sử dụng rộng rãi cùng với tâm lý chủ quan, thiếu nhận thức dẫn đến việc quá liều thuốc gây nên các tác dụng phụ không mong muốn, trong đó nguy hiểm nhất là tình trạng hoại tử gan, có thể dẫn đến tử vong nếu không được xử trí kịp thời.
- Nguyên nhân gây ngộ độc gan khi sử dụng Paracetamol quá liều là nồng độ NAPQI (sinh ra do Paracetamol chuyển hóa qua gan) không thể chuyển hóa hết và tích luỹ gây độc cho gan.
- Các biểu hiện ngộ độc gan do Paracetamol có thể là: ban đầu là buồn nôn, nôn, đau bụng, sau đó nguy kich hơn có thể kích động, hôn mê, mạch huyết áp không ổn định… có thể nguy cơ tử vong.
- text: >-
Trả lời câu hỏi: Tòa nhà cao nhất Việt Nam? Trong nội dung:
The Landmark 81 là một toà nhà chọc trời trong tổ hợp dự án Vinhomes Tân Cảng , một dự án có tổng mức đầu tư 40.000 tỷ đồng , do Công ty Cổ phần Đầu tư xây dựng Tân Liên Phát thuộc Vingroup làm chủ đầu tư . Toà tháp cao 81 tầng , hiện tại là toà nhà cao nhất Việt Nam và là toà nhà cao nhất Đông Nam Á từ tháng 3 năm 2018 .
Toà tháp cao 81 tầng , hiện tại là toà nhà cao nhất Việt Nam và là toà nhà cao nhất Đông Nam Á từ tháng 3 năm 2018 . Dự án được xây dựng ở Tân Cảng , quận Bình Thạnh , ven sông Sài Gòn . Dự án được khởi công ngày 26/07/2014 .
---
This model was tuned from the pretrained ViFlanT5-small model (77M parameters) for 2 epochs on 87GB of text from the CC100 corpus.
The model was trained for Vietnamese reading comprehension: given a question and a context (no more than 400 words), it extracts the answer from that context.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
device = 'cpu'
model_path = "CreatorPhan/ViQA-small"
model = T5ForConditionalGeneration.from_pretrained(model_path).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_path)
context = """
PARACETAMOL DẠNG UỐNG – HƯỚNG DẪN SỬ DỤNG AN TOÀN, HỢP LÝ
Trong tình hình diễn biến phức tạp của dịch COVID-19, các thuốc giảm đau hạ sốt thông dụng như Paracetamol được người dân mua về dự trữ trong hộp thuốc gia đình với mục đích phòng dịch. Tuy nhiên, việc sử dụng thuốc hợp lý và đúng cách đôi khi chưa được chú ý, vì vậy việc hiểu và sử dụng thuốc Paracetamol an toàn là rất cần thiết.
I. Tổng quan thuốc Paracetamol
- Paracetamol dạng uống là thuốc thuộc nhóm giảm đau, hạ sốt và nằm trong danh mục thuốc không kê đơn của Bộ Y tế. Chính vì vậy Paracetamol rất phổ biến trên thị trường với nhiều chế phẩm có dạng bào chế và hàm lượng từ thấp đến cao.
- Tác dụng chính của Paracetamol là giảm đau, hạ sốt nên thuốc được sử dụng rộng rãi trong điều trị các chứng đau và sốt từ nhẹ đến vừa như: cảm cúm, nhức đầu, đau bụng, đau nhức…
- Thuốc không nên sử dụng cho những người dị ứng với Paracetamol, người suy gan nặng.
II. Nguy cơ khi sử dụng Paracetamol
- Việc Paracetamol được sử dụng rộng rãi cùng với tâm lý chủ quan, thiếu nhận thức dẫn đến việc quá liều thuốc gây nên các tác dụng phụ không mong muốn, trong đó nguy hiểm nhất là tình trạng hoại tử gan, có thể dẫn đến tử vong nếu không được xử trí kịp thời.
- Nguyên nhân gây ngộ độc gan khi sử dụng Paracetamol quá liều là nồng độ NAPQI (sinh ra do Paracetamol chuyển hóa qua gan) không thể chuyển hóa hết và tích luỹ gây độc cho gan.
- Các biểu hiện ngộ độc gan do Paracetamol có thể là: ban đầu là buồn nôn, nôn, đau bụng, sau đó nguy kich hơn có thể kích động, hôn mê, mạch huyết áp không ổn định… có thể nguy cơ tử vong.
"""
question = "Công dụng của paracetamol?"
prompt = f"Trả lời câu hỏi: {question} Trong nội dung: {context}"
tokens = tokenizer(prompt, return_tensors='pt').input_ids
output = model.generate(tokens.to(device), max_new_tokens=170)[0]
predict = tokenizer.decode(output, skip_special_tokens=True)
print(len(predict.split()))
print(predict)
```
|
mrm8488/distilgpt2-finetuned-jhegarty-books
|
mrm8488
| 2023-06-17T17:56:10Z | 152 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-10T11:31:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-jhegarty-books
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-jhegarty-books
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 49 | 3.6667 |
| No log | 2.0 | 98 | 3.6202 |
| No log | 3.0 | 147 | 3.6019 |
| No log | 4.0 | 196 | 3.6008 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mmirmahdi/Taxi-v3
|
mmirmahdi
| 2023-06-17T17:31:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T17:30:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the helper defined in the course notebook; it
# downloads and unpickles the model dict from the Hub.
model = load_from_hub(repo_id="mmirmahdi/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
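The pickled model holds a tabular Q-table; the update rule used to train such an agent is the standard one-step Q-learning step (a sketch of the general technique, not the author's exact code):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    # q maps each state to a list of per-action values.
    target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (target - q[state][action])
    return q[state][action]
```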
|
0xghagevaibhav/meMyself
|
0xghagevaibhav
| 2023-06-17T17:11:20Z | 0 | 0 | null |
[
"hi",
"license:unknown",
"region:us"
] | null | 2023-06-17T17:09:19Z |
---
license: unknown
language:
- hi
---
|
erens/mikasalast
|
erens
| 2023-06-17T17:01:46Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-17T16:46:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mikasaLAST Dreambooth model trained by erens with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
pszemraj/long-t5-tglobal-xl-16384-booksci-summary-v1
|
pszemraj
| 2023-06-17T16:51:52Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:pszemraj/scientific_lay_summarisation-elife-norm",
"license:bsd-3-clause",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2023-06-17T12:23:26Z |
---
license:
- bsd-3-clause
- apache-2.0
tags:
- generated_from_trainer
datasets:
- pszemraj/scientific_lay_summarisation-elife-norm
metrics:
- rouge
model-index:
- name: >-
long-t5-tglobal-xl-16384-book-summary-scientific_lay_summarisation-elife-norm-16384-summ-v1
results:
- task:
name: Summarization
type: summarization
dataset:
name: pszemraj/scientific_lay_summarisation-elife-norm
type: pszemraj/scientific_lay_summarisation-elife-norm
split: validation
metrics:
- name: Rouge1
type: rouge
value: 47.4591
language:
- en
library_name: transformers
inference: False
---
# long-t5-tglobal-xl-16384-booksci-summary-v1
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-xl-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) on the pszemraj/scientific_lay_summarisation-elife-norm dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7518
- Rouge1: 47.4591
- Rouge2: 12.7287
- Rougel: 21.5549
- Rougelsum: 44.8709
- Gen Len: 384.39
## Model description
An experiment in further fine-tuning a booksum model on a different dataset. Compare either to the starting checkpoint (_linked above_) or to the [variant fine-tuned only on the scientific lay summaries](https://huggingface.co/pszemraj/long-t5-tglobal-xl-sci-simplify-elife).
## Intended uses & limitations
More information needed
## Training and evaluation data
The pszemraj/scientific_lay_summarisation-elife-norm dataset; inputs were truncated to 16384 tokens and outputs to 1024 tokens.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 878
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9629 | 1.0 | 543 | 1.7637 | 46.6926 | 12.4769 | 21.4364 | 44.4329 | 381.23 |
| 1.8555 | 2.0 | 1086 | 1.7518 | 47.4591 | 12.7287 | 21.5549 | 44.8709 | 384.39 |
|
vlkn/flan-t5-small-taboo-for-llms
|
vlkn
| 2023-06-17T16:20:59Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-03T13:32:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-small-taboo-for-llms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-taboo-for-llms
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4825
- Rouge1: 27.3897
- Rouge2: 9.9232
- Rougel: 24.2026
- Rougelsum: 24.6485
- Gen Len: 18.5172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 137 | 2.5897 | 26.6789 | 9.9538 | 23.6637 | 24.2407 | 18.3621 |
| No log | 2.0 | 274 | 2.5560 | 25.4162 | 9.6277 | 22.7084 | 23.0883 | 18.3966 |
| No log | 3.0 | 411 | 2.5377 | 26.0239 | 9.7748 | 23.4425 | 23.7935 | 18.6034 |
| 2.8204 | 4.0 | 548 | 2.5241 | 26.6294 | 9.9168 | 23.8023 | 24.2756 | 18.7241 |
| 2.8204 | 5.0 | 685 | 2.5120 | 25.8274 | 9.9333 | 23.8865 | 24.0724 | 18.7586 |
| 2.8204 | 6.0 | 822 | 2.5031 | 26.7774 | 9.9651 | 24.3654 | 24.6102 | 18.6034 |
| 2.8204 | 7.0 | 959 | 2.4985 | 26.5058 | 10.0422 | 24.0403 | 24.635 | 18.4655 |
| 2.6101 | 8.0 | 1096 | 2.4934 | 26.6953 | 9.9536 | 24.0293 | 24.6809 | 18.4655 |
| 2.6101 | 9.0 | 1233 | 2.4907 | 26.7978 | 9.6249 | 23.714 | 23.9992 | 18.6034 |
| 2.6101 | 10.0 | 1370 | 2.4847 | 27.2135 | 9.878 | 23.8398 | 24.2389 | 18.5 |
| 2.4726 | 11.0 | 1507 | 2.4856 | 27.1799 | 9.9337 | 23.9393 | 24.4067 | 18.5172 |
| 2.4726 | 12.0 | 1644 | 2.4835 | 27.4491 | 10.1828 | 24.0926 | 24.4819 | 18.5 |
| 2.4726 | 13.0 | 1781 | 2.4825 | 27.3897 | 9.9232 | 24.2026 | 24.6485 | 18.5172 |
| 2.4726 | 14.0 | 1918 | 2.4836 | 27.5567 | 10.7405 | 24.2497 | 24.6566 | 18.5345 |
| 2.3731 | 15.0 | 2055 | 2.4872 | 27.7517 | 11.0182 | 24.1007 | 24.7218 | 18.4828 |
| 2.3731 | 16.0 | 2192 | 2.4852 | 27.3461 | 11.3381 | 24.084 | 24.5125 | 18.4655 |
| 2.3731 | 17.0 | 2329 | 2.4872 | 27.3558 | 11.1005 | 24.047 | 24.4973 | 18.4655 |
| 2.3731 | 18.0 | 2466 | 2.4841 | 26.9427 | 10.9288 | 23.7324 | 24.4298 | 18.5345 |
| 2.2967 | 19.0 | 2603 | 2.4881 | 27.5 | 10.8437 | 24.1593 | 24.6028 | 18.4483 |
| 2.2967 | 20.0 | 2740 | 2.4908 | 27.517 | 11.0039 | 24.1049 | 24.7111 | 18.5 |
| 2.2967 | 21.0 | 2877 | 2.4917 | 27.7333 | 10.935 | 24.4076 | 24.9887 | 18.4138 |
| 2.2553 | 22.0 | 3014 | 2.4926 | 27.6275 | 10.7562 | 24.2295 | 24.7476 | 18.4138 |
| 2.2553 | 23.0 | 3151 | 2.4945 | 27.9085 | 10.943 | 24.6135 | 25.2373 | 18.4138 |
| 2.2553 | 24.0 | 3288 | 2.4948 | 27.5261 | 10.7141 | 24.2429 | 24.816 | 18.4138 |
| 2.2553 | 25.0 | 3425 | 2.4931 | 27.5522 | 10.8702 | 24.5576 | 25.0714 | 18.4655 |
| 2.213 | 26.0 | 3562 | 2.4942 | 27.4758 | 11.0064 | 24.5062 | 25.05 | 18.4655 |
| 2.213 | 27.0 | 3699 | 2.4954 | 27.6967 | 11.1744 | 24.7646 | 25.3172 | 18.4655 |
| 2.213 | 28.0 | 3836 | 2.4951 | 27.7428 | 10.9365 | 24.6427 | 25.2432 | 18.5172 |
| 2.213 | 29.0 | 3973 | 2.4949 | 27.6877 | 10.9522 | 24.6101 | 25.2471 | 18.4655 |
| 2.1865 | 30.0 | 4110 | 2.4952 | 27.7295 | 11.0173 | 24.6556 | 25.2397 | 18.4655 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
aphi/q-FrozenLake-v1-4x4-noSlippery
|
aphi
| 2023-06-17T15:42:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T15:42:37Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the helper defined in the course notebook; it
# downloads and unpickles the model dict from the Hub.
model = load_from_hub(repo_id="aphi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
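At evaluation time the saved Q-table is used greedily; a minimal sketch of the action selection (the general technique, not the author's exact code):

```python
def greedy_action(q_table, state):
    # Pick the action index with the highest Q-value for this state.
    values = q_table[state]
    return max(range(len(values)), key=lambda a: values[a])
```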
|
l3cube-pune/assamese-bert
|
l3cube-pune
| 2023-06-17T15:38:14Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"as",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-20T08:12:25Z |
---
license: cc-by-4.0
language: as
---
## AssameseBERT
AssameseBERT is an Assamese BERT model trained on publicly available Assamese monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
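The card gives no usage snippet; a hedged sketch of the standard `fill-mask` pipeline (the import is deferred so the function can be defined without `transformers` installed; running it downloads the checkpoint):

```python
def fill_mask(text, model_id="l3cube-pune/assamese-bert"):
    # Deferred import; the model weights are fetched on first call.
    from transformers import pipeline
    unmasker = pipeline("fill-mask", model=model_id)
    return unmasker(text)  # top predictions for the [MASK] token
```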
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
l3cube-pune/bengali-bert
|
l3cube-pune
| 2023-06-17T15:37:35Z | 163 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"bn",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-20T08:45:01Z |
---
license: cc-by-4.0
language: bn
---
## BengaliBERT
BengaliBERT is a Bengali BERT model trained on publicly available Bengali monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
paulowoicho/t5-podcast-summarisation
|
paulowoicho
| 2023-06-17T15:36:56Z | 154 | 8 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarisation",
"lm-head",
"en",
"arxiv:2004.04270",
"arxiv:1910.10683",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
datasets:
- Spotify Podcasts Dataset
tags:
- t5
- summarisation
- pytorch
- lm-head
metrics:
- ROUGE
pipeline:
- summarisation
---
# T5 for Automatic Podcast Summarisation
This model is the result of fine-tuning [t5-base](https://huggingface.co/t5-base) on the [Spotify Podcast Dataset](https://arxiv.org/abs/2004.04270).
It is based on [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) which was pretrained on the [C4 dataset](https://huggingface.co/datasets/c4).
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
## Intended uses & limitations
This model is intended for automatic podcast summarisation. Because creator-provided descriptions
were used for training, the model also learned to generate promotional material (links, hashtags, etc.) in its summaries,
so some post-processing may be required on the model's outputs.
If run on Colab, the instance will crash if the number of tokens in the transcript exceeds 7000. In practice, the model
generated reasonable summaries even when the podcast transcript was truncated to reduce the number of tokens.
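A rough pre-truncation helper along those lines (word count is only a proxy for the tokenizer's token count; the 7000 figure is the Colab limit noted above):

```python
def truncate_transcript(text, max_words=7000):
    # Approximate the token limit by keeping only the first max_words words.
    words = text.split()
    return " ".join(words[:max_words])
```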
#### How to use
The model can be used with the summarisation pipeline as follows:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="paulowoicho/t5-podcast-summarisation", tokenizer="paulowoicho/t5-podcast-summarisation")
summary = summarizer(podcast_transcript, min_length=5, max_length=20)
print(summary[0]['summary_text'])
```
## Training data
This model is the result of fine-tuning [t5-base](https://huggingface.co/t5-base) on the [Spotify Podcast Dataset](https://arxiv.org/abs/2004.04270).
[Pre-processing](https://github.com/paulowoicho/msc_project/blob/master/reformat.py) was done on the original data before fine-tuning.
## Training procedure
Training was largely based on [Fine-tune T5 for Summarization](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) by [Abhishek Kumar Mishra](https://github.com/abhimishra91)
|
l3cube-pune/odia-bert
|
l3cube-pune
| 2023-06-17T15:36:40Z | 457 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"or",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-20T08:26:35Z |
---
license: cc-by-4.0
language: or
---
## OdiaBERT
OdiaBERT is an Odia BERT model trained on publicly available Odia monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
l3cube-pune/tamil-bert
|
l3cube-pune
| 2023-06-17T15:36:00Z | 554 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"ta",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-20T07:45:13Z |
---
license: cc-by-4.0
language: ta
---
## TamilBERT
TamilBERT is a Tamil BERT model trained on publicly available Tamil monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
l3cube-pune/malayalam-bert
|
l3cube-pune
| 2023-06-17T15:35:26Z | 417 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"ml",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-20T07:31:42Z |
---
license: cc-by-4.0
language: ml
---
## MalayalamBERT
MalayalamBERT is a Malayalam BERT model trained on publicly available Malayalam monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our [<a href='https://arxiv.org/abs/2211.11418'> paper </a>] .
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
l3cube-pune/hindi-roberta
|
l3cube-pune
| 2023-06-17T15:31:32Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"hi",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-19T18:21:53Z |
---
license: cc-by-4.0
language: hi
---
## HindRoBERTa
HindRoBERTa is a Hindi RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on publicly available Hindi monolingual datasets.
[project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [<a href='https://arxiv.org/abs/2211.11418'> paper </a>] .
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
l3cube-pune/marathi-bert-v2
|
l3cube-pune
| 2023-06-17T15:30:14Z | 391 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-16T17:52:15Z |
---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaBERT
MahaBERT is a Marathi BERT model. It is a multilingual BERT (google/muril-base-cased) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159)
```
@inproceedings{joshi-2022-l3cube,
title = "{L}3{C}ube-{M}aha{C}orpus and {M}aha{BERT}: {M}arathi Monolingual Corpus, {M}arathi {BERT} Language Models, and Resources",
author = "Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.17",
pages = "97--101",
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
xqs/ppo-LunarLander-v2
|
xqs
| 2023-06-17T15:27:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T15:26:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.17 +/- 23.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed; verify it against the files in this repo.
checkpoint = load_from_hub("xqs/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Benned/TestLyco
|
Benned
| 2023-06-17T15:24:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T15:22:06Z |
---
license: creativeml-openrail-m
---
|
atrytone/scibert_uncased_claim_id
|
atrytone
| 2023-06-17T15:16:47Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T05:07:15Z |
---
license: apache-2.0
language:
- en
---
Fine-tuned SciBERT uncased model [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) for claim detection from abstracts.
|
biodatlab/MIReAD-Neuro
|
biodatlab
| 2023-06-17T15:16:26Z | 122 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-09T05:23:59Z |
---
language:
- en
pipeline_tag: text-classification
metrics:
- f1
- accuracy
- recall
- precision
library_name: transformers
widget:
- text: The past 25 years have seen a strong increase in the number of publications related to criticality in different areas of neuroscience. The potential of criticality to explain various brain properties, including optimal information processing, has made it an increasingly exciting area of investigation for neuroscientists. Recent reviews on this topic, sometimes termed brain criticality, make brief mention of clinical applications of these findings to several neurological disorders such as epilepsy, neurodegenerative disease, and neonatal hypoxia. Other clinically relevant domains - including anesthesia, sleep medicine, developmental-behavioral pediatrics, and psychiatry - are seldom discussed in review papers of brain criticality. Thorough assessments of these application areas and their relevance for clinicians have also yet to be published. In this scoping review, studies of brain criticality involving human data of all ages are evaluated for their current and future clinical relevance. To make the results of these studies understandable to a more clinical audience, a review of the key concepts behind criticality (e.g., phase transitions, long-range temporal correlation, self-organized criticality, power laws, branching processes) precedes the discussion of human clinical studies. Open questions and forthcoming areas of investigation are also considered.
---
# MIReAD Neuro
This model is a fine-tuned version of [arazd/MIReAD](https://huggingface.co/arazd/MIReAD) on a dataset of Neuroscience papers from 200 journals collected from various sources for a journal classification task.
It achieves the following results on the evaluation set:
- Loss: 2.7117
- Accuracy: 0.4011
- F1: 0.3962
- Precision: 0.4066
- Recall: 0.3999
## Model description
This model was trained on a journal classification task.
## Intended uses & limitations
The intended use of this model is to create abstract embeddings for semantic similarity search for neuroscience-related articles.
## Model Usage
To load the model:
```py
from transformers import BertForSequenceClassification, AutoTokenizer
model_path = "biodatlab/MIReAD-Neuro"
model = BertForSequenceClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
```
To create embeddings and for classification:
```py
import torch

# sample abstract & title text
title = "Why Brain Criticality Is Clinically Relevant: A Scoping Review."
abstract = "The past 25 years have seen a strong increase in the number of publications related to criticality in different areas of neuroscience..."
text = title + tokenizer.sep_token + abstract
tokens = tokenizer(
    text,
    max_length=512,
    padding=True,
    truncation=True,
    return_tensors="pt",
)

# to generate an embedding from a given title and abstract
with torch.no_grad():
    output = model.bert(**tokens)
    embedding = output.last_hidden_state[:, 0, :]

# to classify (200 journals) a given title and abstract
# note: `class` is a reserved word in Python, so bind the logits to another name
logits = model(**tokens).logits
```
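The [CLS] embeddings produced above are intended for semantic similarity search, which reduces to ranking candidates by cosine similarity. A dependency-free sketch of just the scoring function (the function name and example vectors are illustrative, not part of this repo):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors (plain-Python sketch)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Parallel vectors score ~1.0; orthogonal vectors score 0.0.
print(round(cosine_similarity([1.0, 2.0], [2.0, 4.0]), 6))  # 1.0
```

In practice the same computation is done in batch with `torch` on the stacked embedding matrix.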
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- num_epochs: 6
|
sttteephen/ppo-LunarLander-v2
|
sttteephen
| 2023-06-17T14:47:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T14:47:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.73 +/- 23.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed; verify it against the files in this repo.
checkpoint = load_from_hub("sttteephen/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
JCTN/controlnet_qrcode-control_v11p_sd21
|
JCTN
| 2023-06-17T14:45:13Z | 14 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"controlnet",
"image-to-image",
"en",
"license:openrail++",
"region:us"
] |
image-to-image
| 2023-06-16T21:21:29Z |
---
tags:
- stable-diffusion
- controlnet
- image-to-image
license: openrail++
language:
- en
pipeline_tag: image-to-image
---
# QR Code Conditioned ControlNet Models for Stable Diffusion 2.1

## Model Description
This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v2.1.
The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, a 1.5 version model was also trained on the same dataset for those who are using the older version.
## How to use with diffusers
```bash
pip -q install diffusers transformers accelerate torch xformers
```
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v11p_sd21",
torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
)
pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
def resize_for_condition_image(input_image: Image, resolution: int):
input_image = input_image.convert("RGB")
W, H = input_image.size
k = float(resolution) / min(H, W)
H *= k
W *= k
H = int(round(H / 64.0)) * 64
W = int(round(W / 64.0)) * 64
img = input_image.resize((W, H), resample=Image.LANCZOS)
return img
# play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image
# qr code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")
condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)
image = pipe(prompt="a bilboard in NYC with a qrcode",
negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
image=init_image,
control_image=condition_image,
width=768,
height=768,
guidance_scale=20,
controlnet_conditioning_scale=1.5,
generator=generator,
strength=0.9,
num_inference_steps=150,
)
image.images[0]
```
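The `resize_for_condition_image` helper above scales the shorter side to the target resolution and snaps both sides to multiples of 64 (a requirement of the diffusion model's latent grid). A pure-Python sketch of just that arithmetic (the function name is illustrative):

```python
def snapped_size(width, height, resolution=768):
    """Scale the shorter side to `resolution`, then snap both sides to multiples of 64."""
    k = float(resolution) / min(width, height)
    w = int(round(width * k / 64.0)) * 64
    h = int(round(height * k / 64.0)) * 64
    return w, h

print(snapped_size(512, 1024))  # (768, 1536) -- both divisible by 64
```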
## Performance and Limitations
These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).**
To balance between style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, as well as the correct prompt. Some prompts do not work until the weight is increased substantially. The process of finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork.
## Installation
The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other controlnet models are installed, which varies per application.
For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded using the ControlNet webui extension, which you can install through the extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your ControlNet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version depending on your base stable diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail.
Make sure to look up additional info on how to use ControlNet if you get stuck; once you have the webui up and running, it's really easy to install the ControlNet extension as well.
|
Bala-A87/Huggy-DRL
|
Bala-A87
| 2023-06-17T14:35:21Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-17T14:34:51Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Bala-A87/Huggy-DRL
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
VTSTech/Desktop-GPT-111m
|
VTSTech
| 2023-06-17T14:30:24Z | 146 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-17T16:29:06Z |
---
license: cc-by-nc-4.0
tags:
- CasualLM
- AutoModel
- AutoTokenizer
- text-generation
- question-answer
---
This is my first attempt at training a model. Based on Cerebras-GPT-111m, trained on a conversational dataset with some Q/A.

Trained using code from https://github.com/Dampish0/ModelTrainingLocal

My homepage: https://www.vts-tech.org

My Github: https://github.com/Veritas83
|
mtebad/classification_model
|
mtebad
| 2023-06-17T13:51:57Z | 111 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T09:59:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: classification_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.937
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1669
- Accuracy: 0.937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2461 | 1.0 | 1000 | 0.1964 | 0.9265 |
| 0.1464 | 2.0 | 2000 | 0.1669 | 0.937 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
thiendio/rl_course_vizdoom_health_gathering_supreme
|
thiendio
| 2023-06-17T13:49:02Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T13:47:55Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 7.95 +/- 1.58
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r thiendio/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
catrabbitbear/lunar-lander
|
catrabbitbear
| 2023-06-17T13:43:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T13:43:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.65 +/- 38.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed; verify it against the files in this repo.
checkpoint = load_from_hub("catrabbitbear/lunar-lander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
2tle/kobart-std-to-jeju
|
2tle
| 2023-06-17T13:41:43Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-17T13:31:26Z |
---
license: mit
language:
- ko
metrics:
- bleu
---
# Korean Standard to Jejueo (Jeju Dialect) Translator BART Model
## Dataset
- [AI Hub Korean Jejueo(Jeju Dialect) Voice data](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=121)
## Model Score
- BLEU: 40%
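BLEU, the score reported above, measures n-gram overlap between a candidate translation and a reference. A minimal sentence-level sketch in pure Python (illustrative only; real evaluations use a library such as sacrebleu, with smoothing and corpus-level statistics):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=2):
    """Minimal sentence-level BLEU with uniform weights and no smoothing (illustrative)."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        if overlap == 0 or total == 0:
            return 0.0
        log_precisions.append(math.log(overlap / total))
    brevity_penalty = min(1.0, math.exp(1.0 - len(ref) / len(cand)))
    return brevity_penalty * math.exp(sum(log_precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # 1.0
```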
|
tux/Reinforce-copter2
|
tux
| 2023-06-17T13:38:46Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T13:38:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-copter2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 22.20 +/- 15.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SikongSphere/sikong-alpaca-7b-chinese
|
SikongSphere
| 2023-06-17T13:30:51Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:customized",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T12:19:19Z |
---
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune
This model is a fine-tuned version of [/root/autodl-tmp/sikong/repo/LMFlow/output_models/chinese-alpaca-7b-merged](https://huggingface.co//root/autodl-tmp/sikong/repo/LMFlow/output_models/chinese-alpaca-7b-merged) on the customized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
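The total train batch size in the list above follows directly from the distributed setup: per-device batch size times number of devices. A one-line sanity check with the values listed:

```python
train_batch_size = 2  # per device
num_devices = 4
total_train_batch_size = train_batch_size * num_devices
print(total_train_batch_size)  # 8, matching the value reported above
```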
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
sd-dreambooth-library/baysafinal
|
sd-dreambooth-library
| 2023-06-17T13:30:08Z | 31 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-17T13:28:10Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: Baysafinal1
---
### Baysafinal Dreambooth model trained by LabanAsmar with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
Baysafinal1 (use that on your prompt)

|
antphb/DS-Chatbox-bigscience-bloom-560m
|
antphb
| 2023-06-17T13:15:50Z | 151 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bloom",
"text-generation",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T11:27:42Z |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: DS-Chatbox-bigscience-bloom-560m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Chatbox-bigscience-bloom-560m
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.8320
- eval_runtime: 175.7948
- eval_samples_per_second: 37.402
- eval_steps_per_second: 4.676
- epoch: 0.03
- step: 500
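As a quick consistency check, eval samples per second divided by eval steps per second should recover the eval batch size used during evaluation:

```python
eval_samples_per_second = 37.402
eval_steps_per_second = 4.676
print(round(eval_samples_per_second / eval_steps_per_second))  # 8, the eval batch size
```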
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
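The cosine schedule with 100 warmup steps configured above can be sketched as follows (a minimal illustration; the total step count below is an assumption, since the real value depends on dataset size and epochs):

```python
import math

def lr_at(step, base_lr=5e-4, warmup_steps=100, total_steps=1000):
    # Linear warmup from 0 to base_lr, then cosine decay back to 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```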
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Vas123/my_awesome_mind_model
|
Vas123
| 2023-06-17T12:49:22Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-14T14:38:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6448
- Accuracy: 0.0531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
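The gradient-accumulation setting above (4 micro-batches of 32 giving a total batch of 128) can be illustrated with a toy linear model; this is a sketch of the mechanism, not the wav2vec2 training code:

```python
import numpy as np

# Accumulating scaled micro-batch gradients reproduces the full-batch gradient.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(128, 3)), rng.normal(size=128)
w = np.zeros(3)

accum_steps, micro = 4, 32
grad = np.zeros_like(w)
for i in range(accum_steps):
    xb, yb = X[i * micro:(i + 1) * micro], y[i * micro:(i + 1) * micro]
    # Mean-squared-error gradient for the micro-batch, scaled by 1/accum_steps.
    grad += 2 * xb.T @ (xb @ w - yb) / micro / accum_steps

full_grad = 2 * X.T @ (X @ w - y) / len(X)
print(np.allclose(grad, full_grad))
```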
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6343 | 0.1150 |
| No log | 1.87 | 7 | 2.6413 | 0.0973 |
| 2.636 | 2.93 | 11 | 2.6433 | 0.0796 |
| 2.636 | 4.0 | 15 | 2.6424 | 0.0708 |
| 2.636 | 4.8 | 18 | 2.6433 | 0.0619 |
| 2.6231 | 5.87 | 22 | 2.6456 | 0.0354 |
| 2.6231 | 6.93 | 26 | 2.6451 | 0.0619 |
| 2.6184 | 8.0 | 30 | 2.6448 | 0.0531 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thiendio/ppo-from-scratch-lunar
|
thiendio
| 2023-06-17T12:26:39Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T12:26:16Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -158.29 +/- 101.80
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
alexanderjoossens/w2v2-libri-10min
|
alexanderjoossens
| 2023-06-17T12:16:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-22T09:09:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: w2v2-libri-10min
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-libri-10min
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
SikongSphere/sikong-llama-7b-chinese
|
SikongSphere
| 2023-06-17T12:01:59Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:customized",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T09:09:19Z |
---
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune
This model is a fine-tuned version of [/root/autodl-tmp/sikong/repo/LMFlow/output_models/Linly-Chinese-LLaMA-7b-hf](https://huggingface.co//root/autodl-tmp/sikong/repo/LMFlow/output_models/Linly-Chinese-LLaMA-7b-hf) on the customized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
jalFaizy/ppo-lunar
|
jalFaizy
| 2023-06-17T11:42:59Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T11:42:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: trial1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.71 +/- 14.10
name: mean_reward
verified: false
---
# **trial1** Agent playing **LunarLander-v2**
This is a trained model of a **trial1** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename and the PPO algorithm class are assumptions; check the repo's file list for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is a guess -- verify against the repository contents.
checkpoint = load_from_hub(repo_id="jalFaizy/ppo-lunar", filename="trial1.zip")
model = PPO.load(checkpoint)
```
|
Xavia0012/bert-tomi
|
Xavia0012
| 2023-06-17T11:02:13Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-12T19:49:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-tomi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tomi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 1.5815 |
| No log | 2.0 | 40 | 0.7518 |
| No log | 3.0 | 60 | 0.7153 |
| No log | 4.0 | 80 | 0.6354 |
| No log | 5.0 | 100 | 0.5895 |
| No log | 6.0 | 120 | 0.4882 |
| No log | 7.0 | 140 | 0.4590 |
| No log | 8.0 | 160 | 0.4303 |
| No log | 9.0 | 180 | 0.4644 |
| No log | 10.0 | 200 | 0.4416 |
| No log | 11.0 | 220 | 0.4348 |
| No log | 12.0 | 240 | 0.5306 |
| No log | 13.0 | 260 | 0.4412 |
| No log | 14.0 | 280 | 0.4053 |
| No log | 15.0 | 300 | 0.4185 |
| No log | 16.0 | 320 | 0.3982 |
| No log | 17.0 | 340 | 0.4291 |
| No log | 18.0 | 360 | 0.4316 |
| No log | 19.0 | 380 | 0.4328 |
| No log | 20.0 | 400 | 0.4198 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.10.1
- Tokenizers 0.12.1
|
mattladewig/distilbert-base-uncased-finetuned-ner
|
mattladewig
| 2023-06-17T10:34:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-17T08:37:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mattladewig/distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mattladewig/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0342
- Validation Loss: 0.0614
- Train Precision: 0.9248
- Train Recall: 0.9365
- Train F1: 0.9306
- Train Accuracy: 0.9833
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
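For reference, the `PolynomialDecay` schedule in the config above with `power=1.0` is simply linear decay from 2e-05 to 0 over 2631 steps; a minimal sketch:

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=2631, end_lr=0.0, power=1.0):
    # With power=1.0 this reduces to linear interpolation from initial_lr to end_lr.
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
```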
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1951 | 0.0694 | 0.9087 | 0.9181 | 0.9134 | 0.9799 | 0 |
| 0.0530 | 0.0621 | 0.9246 | 0.9301 | 0.9273 | 0.9823 | 1 |
| 0.0342 | 0.0614 | 0.9248 | 0.9365 | 0.9306 | 0.9833 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Chetna19/distilbert-base-uncased-distilled-squad_qa_model
|
Chetna19
| 2023-06-17T10:13:01Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:subjqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-11T13:02:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- subjqa
model-index:
- name: distilbert-base-uncased-distilled-squad_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-squad_qa_model
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on the subjqa dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1556 | 1.0 | 32 | 4.1242 |
| 4.0411 | 2.0 | 64 | 4.0582 |
| 3.9828 | 3.0 | 96 | 3.9948 |
| 3.9068 | 4.0 | 128 | 3.9378 |
| 3.8152 | 5.0 | 160 | 3.8835 |
| 3.7906 | 6.0 | 192 | 3.8329 |
| 3.7543 | 7.0 | 224 | 3.7842 |
| 3.7173 | 8.0 | 256 | 3.7377 |
| 3.6717 | 9.0 | 288 | 3.6958 |
| 3.6219 | 10.0 | 320 | 3.6559 |
| 3.587 | 11.0 | 352 | 3.6185 |
| 3.6111 | 12.0 | 384 | 3.5808 |
| 3.5374 | 13.0 | 416 | 3.5483 |
| 3.4506 | 14.0 | 448 | 3.5175 |
| 3.4286 | 15.0 | 480 | 3.4873 |
| 3.4021 | 16.0 | 512 | 3.4596 |
| 3.432 | 17.0 | 544 | 3.4328 |
| 3.3235 | 18.0 | 576 | 3.4079 |
| 3.3627 | 19.0 | 608 | 3.3841 |
| 3.323 | 20.0 | 640 | 3.3615 |
| 3.3127 | 21.0 | 672 | 3.3389 |
| 3.2635 | 22.0 | 704 | 3.3199 |
| 3.2542 | 23.0 | 736 | 3.3013 |
| 3.2302 | 24.0 | 768 | 3.2846 |
| 3.1699 | 25.0 | 800 | 3.2676 |
| 3.2333 | 26.0 | 832 | 3.2516 |
| 3.2204 | 27.0 | 864 | 3.2364 |
| 3.1809 | 28.0 | 896 | 3.2218 |
| 3.1739 | 29.0 | 928 | 3.2082 |
| 3.1966 | 30.0 | 960 | 3.1950 |
| 3.1513 | 31.0 | 992 | 3.1826 |
| 3.135 | 32.0 | 1024 | 3.1713 |
| 3.1253 | 33.0 | 1056 | 3.1599 |
| 3.0768 | 34.0 | 1088 | 3.1498 |
| 3.1031 | 35.0 | 1120 | 3.1394 |
| 3.064 | 36.0 | 1152 | 3.1293 |
| 3.0391 | 37.0 | 1184 | 3.1200 |
| 3.0701 | 38.0 | 1216 | 3.1117 |
| 3.0787 | 39.0 | 1248 | 3.1032 |
| 3.0423 | 40.0 | 1280 | 3.0956 |
| 3.0214 | 41.0 | 1312 | 3.0875 |
| 3.0289 | 42.0 | 1344 | 3.0804 |
| 2.9667 | 43.0 | 1376 | 3.0736 |
| 3.0341 | 44.0 | 1408 | 3.0671 |
| 3.0098 | 45.0 | 1440 | 3.0606 |
| 3.0202 | 46.0 | 1472 | 3.0544 |
| 2.9598 | 47.0 | 1504 | 3.0490 |
| 2.9734 | 48.0 | 1536 | 3.0430 |
| 2.9381 | 49.0 | 1568 | 3.0375 |
| 2.9444 | 50.0 | 1600 | 3.0328 |
| 2.9357 | 51.0 | 1632 | 3.0280 |
| 2.9453 | 52.0 | 1664 | 3.0237 |
| 2.9906 | 53.0 | 1696 | 3.0191 |
| 2.934 | 54.0 | 1728 | 3.0148 |
| 2.9076 | 55.0 | 1760 | 3.0110 |
| 2.9874 | 56.0 | 1792 | 3.0070 |
| 2.9682 | 57.0 | 1824 | 3.0032 |
| 2.9287 | 58.0 | 1856 | 2.9994 |
| 2.9575 | 59.0 | 1888 | 2.9956 |
| 2.8618 | 60.0 | 1920 | 2.9926 |
| 2.9614 | 61.0 | 1952 | 2.9893 |
| 2.9463 | 62.0 | 1984 | 2.9861 |
| 2.8927 | 63.0 | 2016 | 2.9834 |
| 2.9048 | 64.0 | 2048 | 2.9805 |
| 2.9161 | 65.0 | 2080 | 2.9777 |
| 2.9117 | 66.0 | 2112 | 2.9753 |
| 2.932 | 67.0 | 2144 | 2.9729 |
| 2.9148 | 68.0 | 2176 | 2.9706 |
| 2.8919 | 69.0 | 2208 | 2.9683 |
| 2.9278 | 70.0 | 2240 | 2.9662 |
| 2.869 | 71.0 | 2272 | 2.9643 |
| 2.8844 | 72.0 | 2304 | 2.9622 |
| 2.8636 | 73.0 | 2336 | 2.9603 |
| 2.8734 | 74.0 | 2368 | 2.9585 |
| 2.8934 | 75.0 | 2400 | 2.9569 |
| 2.86 | 76.0 | 2432 | 2.9551 |
| 2.8366 | 77.0 | 2464 | 2.9539 |
| 2.8887 | 78.0 | 2496 | 2.9522 |
| 2.8632 | 79.0 | 2528 | 2.9511 |
| 2.8691 | 80.0 | 2560 | 2.9496 |
| 2.8597 | 81.0 | 2592 | 2.9484 |
| 2.8775 | 82.0 | 2624 | 2.9473 |
| 2.8491 | 83.0 | 2656 | 2.9461 |
| 2.8639 | 84.0 | 2688 | 2.9450 |
| 2.8659 | 85.0 | 2720 | 2.9443 |
| 2.8557 | 86.0 | 2752 | 2.9433 |
| 2.8188 | 87.0 | 2784 | 2.9423 |
| 2.8896 | 88.0 | 2816 | 2.9416 |
| 2.8102 | 89.0 | 2848 | 2.9409 |
| 2.8452 | 90.0 | 2880 | 2.9403 |
| 2.8437 | 91.0 | 2912 | 2.9399 |
| 2.8193 | 92.0 | 2944 | 2.9397 |
| 2.8645 | 93.0 | 2976 | 2.9391 |
| 2.8745 | 94.0 | 3008 | 2.9388 |
| 2.8568 | 95.0 | 3040 | 2.9385 |
| 2.8832 | 96.0 | 3072 | 2.9382 |
| 2.8801 | 97.0 | 3104 | 2.9382 |
| 2.8488 | 98.0 | 3136 | 2.9383 |
| 2.8233 | 99.0 | 3168 | 2.9380 |
| 2.8505 | 100.0 | 3200 | 2.9380 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.0a0+d321be6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kevinng77/unsup_bert_L3
|
kevinng77
| 2023-06-17T10:00:06Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T08:47:52Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
---
```python
# transformers==4.29.1
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification
onnx_model_path = "kevinng77/unsup_bert_L3"
tokenizer = AutoTokenizer.from_pretrained(onnx_model_path)
onnx_model = ORTModelForSequenceClassification.from_pretrained(onnx_model_path)
onnx_pipe = pipeline(task="text-classification", model=onnx_model, tokenizer=tokenizer)
onnx_pipe("How many rows are there in the table?")
```
|
hts98/whisper-tiny-paper
|
hts98
| 2023-06-17T09:45:10Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-16T15:36:23Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-paper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-paper
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6807
- Wer: 50.8558
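The Wer figure above is the word error rate; a minimal sketch of how it is computed (word-level edit distance divided by reference length, in percent — illustrative only, the reported number comes from the evaluation library):

```python
def wer(reference, hypothesis):
    # Word-level Levenshtein distance via dynamic programming.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```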
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 72 | 0.6515 | 50.3886 |
| No log | 2.0 | 144 | 0.6566 | 50.8012 |
| No log | 3.0 | 216 | 0.6624 | 50.3713 |
| No log | 4.0 | 288 | 0.6684 | 50.8026 |
| No log | 5.0 | 360 | 0.6807 | 50.8558 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.7.0
- Tokenizers 0.13.3
|
ganghe74/distilbert-base-uncased-finetuned-emotion
|
ganghe74
| 2023-06-17T09:34:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T09:13:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.922469380812715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2170
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8057 | 1.0 | 250 | 0.3170 | 0.905 | 0.9023 |
| 0.242 | 2.0 | 500 | 0.2170 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
edu-linguistic/opt-1.3b-edu-sft
|
edu-linguistic
| 2023-06-17T09:28:57Z | 0 | 0 | null |
[
"en",
"dataset:yahma/alpaca-cleaned",
"dataset:Nebulous/gpt4all_pruned",
"region:us"
] | null | 2023-06-15T14:16:11Z |
---
datasets:
- yahma/alpaca-cleaned
- Nebulous/gpt4all_pruned
language:
- en
---
## Inference Example:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "edu-linguistic/opt-1.3b-edu-sft"
model_name = 'facebook/opt-1.3b'
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(model_name)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(model_name)
question = "<|prompter|> Consider the following function: f(x1, x2) = ln(x1). This function is…"
question = tokenizer.encode(question, return_tensors='pt')
generation_kwargs = {
"do_sample": True,
"top_k": 0,
"top_p": 0.9,
"bos_token_id": tokenizer.bos_token_id,
"pad_token_id": tokenizer.pad_token_id,
"eos_token_id": tokenizer.eos_token_id,
"num_return_sequences": 1,
"min_new_tokens": 10,
"max_new_tokens": 512,
}
response = model.generate(input_ids=question, **generation_kwargs)
response = tokenizer.decode(response[0],
skip_special_tokens=False,
clean_up_tokenization_spaces=False
)
print(response)
```
|
coyude/Nous-Hermes-13b-Chinese-GGML
|
coyude
| 2023-06-17T09:28:23Z | 0 | 22 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-11T03:42:04Z |
---
license: apache-2.0
language:
- zh
- en
library_name: transformers
pipeline_tag: text-generation
---
原始模型:https://huggingface.co/NousResearch/Nous-Hermes-13b
lora:https://huggingface.co/ziqingyang/chinese-alpaca-lora-13b
将Nous-Hermes-13b与chinese-alpaca-lora-13b进行合并,增强模型的中文能力,~~不过存在翻译腔~~
使用项目:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
https://github.com/ggerganov/llama.cpp
**推荐q5_k_m或q4_k_m 该仓库模型均为ggmlv3模型**
Text-generation-webui one-click package (Chinese guide):
https://www.bilibili.com/read/cv23495183
---------------------------------------------------------------------------------------------
Original model: https://huggingface.co/NousResearch/Nous-Hermes-13b
Lora: https://huggingface.co/ziqingyang/chinese-alpaca-lora-13b
The Nous-Hermes-13b model is merged with the chinese-alpaca-lora-13b model to enhance the Chinese language capability of the model, ~~although it may exhibit a translation style.~~
Usage projects:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
https://github.com/ggerganov/llama.cpp
**q5_k_m or q4_k_m is recommended. All models in this repository are ggmlv3 models.**
|
parkyunmin/my_awesome_eli5_clm-model
|
parkyunmin
| 2023-06-17T09:09:15Z | 211 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T05:54:26Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 49 | 1.6679 |
| No log | 2.0 | 98 | 1.5629 |
| No log | 3.0 | 147 | 1.5380 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
PabloQuant29/ppo-LunarLander-v2
|
PabloQuant29
| 2023-06-17T08:36:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T08:35:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.46 +/- 18.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is a guess -- verify against the repository contents.
checkpoint = load_from_hub(repo_id="PabloQuant29/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AriesChen/GeoLLM
|
AriesChen
| 2023-06-17T08:32:06Z | 195 | 3 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2023-06-17T08:30:04Z |
# GeoLLM
**Large Language Model for Geology**
Large language models are used to organize geology-related knowledge (geology, geophysics, geophysical logging, etc.). This version uses the [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) base model and fine-tunes it with P-Tuning v2.
---
### Sedimentology
Sedimentology, the study of sedimentary rocks and the processes by which they are formed, encompasses a wide range of phenomena. It includes the five fundamental processes covered by the term sedimentation: weathering, erosion, transportation, deposition, and diagenesis.
**Datasets:** 《沉积岩石学(第四版)》 (*Sedimentary Petrology*, 4th edition) by Zhu Xiaomin (朱筱敏)
**Model:** ChatGLM-6B
**Fine-tuning:** P-Tuning v2
**Before fine-tuning**
```
response, history = model.chat(tokenizer, "什么是沉积岩石学?", history=[])
response
沉积岩石学是一门研究沉积岩的形成、结构和成分的学科,主要关注地球表面上不同条件下的沉积过程和岩石形成机制,包括岩浆沉积、冰川沉积、洪水沉积、海洋沉积等。沉积岩石学在地质学、地球物理学、地球化学、材料科学等领域都有广泛应用,因为沉积岩是许多自然和人工地质工程的基础,如地质勘探、矿产资源开发、土木工程、环境科学等。沉积岩石学的研究对象包括沉积岩的地质特征、成分和构造特征,以及沉积岩与地壳、岩浆和变质岩的关系。研究方法包括沉积岩分析、岩相学分析、岩浆动力学分析等。
```
**After fine-tuning**
```
response, history = model.chat(tokenizer, "什么是沉积岩石学?", history=[])
response
沉积岩石学是研究沉积岩的物质成分、结构构造、岩石类型、沉积物沉积作用和沉积物质形成环境以及沉积岩分布规律的一门科学。
```
**Error Analysis:** We fine-tuned the model on approximately 500 entries drawn from academic textbooks, using P-Tuning v2 for optimization; we have not yet tuned the hyperparameters in detail. Given the small training set and limited fine-tuning, the outputs may exhibit some irregularities.
**Results Analysis:** The fine-tuned model gives noticeably more reliable answers (more precise and concise) within its specialized knowledge domain. Going forward, we will continue to enrich the training data and refine our fine-tuning methodology to obtain better results.
---
### TODO
1. Geophysical Exploration
2. Geophysical logging
3. Petroleum Geology
etc...
---
### Related Resources
1. [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B): ChatGLM-6B is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters.
|
okazaki-lab/ss_wsd
|
okazaki-lab
| 2023-06-17T08:21:08Z | 0 | 0 |
transformers
|
[
"transformers",
"word_sense_disambiguation",
"en",
"dataset:SemCor",
"dataset:WordNet",
"dataset:WSD_Evaluation_Framework",
"arxiv:2304.11340",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-17T07:20:37Z |
---
license: apache-2.0
language:
- en
tags:
- word_sense_disambiguation
library_name: transformers
datasets:
- SemCor
- WordNet
- WSD_Evaluation_Framework
metrics:
- f1
---
# Semantic Specialization for Knowledge-based Word Sense Disambiguation
* This repository contains the trained model (projection heads) and sense/context embeddings used for training and evaluating the model.
* If you want to learn how to use these files, please refer to the [semantic_specialization_for_wsd](https://github.com/s-mizuki-nlp/semantic_specialization_for_wsd) repository.
## Trained Model (Projection Heads)
* File: checkpoints/baseline/last.ckpt
* This is one of the trained models used for reporting the main results (Table 2 in [Mizuki and Okazaki, EACL2023]).
NOTE: Five runs were performed in total.
* The main hyperparameters used for training are as follows:
| Argument name | Value | Description |
|----------------------------------------------------------------|----------------------------|------------------------------------------------------------------------------------|
| max_epochs | 15 | Maximum number of training epochs |
| cfg_similarity_class.temperature ($\beta^{-1}$) | 0.015625 (=1/64) | Temperature parameter for the contrastive loss |
| batch_size ($N_B$) | 256 | Number of samples in each batch for the attract-repel and self-training objectives |
| coef_max_pool_margin_loss ($\alpha$) | 0.2 | Coefficient for the self-training loss |
| cfg_gloss_projection_head.n_layer | 2 | Number of FFNN layers for the projection heads |
| cfg_gloss_projection_head.max_l2_norm_ratio ($\epsilon$) | 0.015 | Hyperparameter for the distance constraint integrated in the projection heads |
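As a rough sketch of how the temperature β⁻¹ enters a contrastive objective (an InfoNCE-style formulation for illustration; this is not the project's exact training code):

```python
import math

def contrastive_loss(sim_pos, sim_negs, temperature=1 / 64):
    # Softmax cross-entropy over temperature-scaled similarities;
    # a small temperature sharpens the distribution over candidates.
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]
```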
## Sense/context embeddings
* Directory: `data/bert_embeddings/`
* Sense embeddings: `bert-large-cased_WordNet_Gloss_Corpus.hdf5`
* Context embeddings for the self-training objective: `bert-large-cased_SemCor.hdf5`
* Context embeddings for evaluating the WSD task: `bert-large-cased_WSDEval-ALL.hdf5`
# Reference
```
@inproceedings{Mizuki:EACL2023,
title = "Semantic Specialization for Knowledge-based Word Sense Disambiguation",
author = "Mizuki, Sakae and Okazaki, Naoaki",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
series = {EACL},
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
pages = "3449--3462",
}
```
* [arXiv version](https://arxiv.org/abs/2304.11340) is also available.
|
SM16/TreeClassifier
|
SM16
| 2023-06-17T08:15:11Z | 218 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-17T07:27:25Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: TreeClassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# TreeClassifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Pepper Tree

#### Weeping Willow

|
musabg/mt5-xl-tr-summarization
|
musabg
| 2023-06-17T07:25:20Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"tr",
"dataset:musabg/wikipedia-tr-summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T16:24:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- musabg/wikipedia-tr-summarization
metrics:
- rouge
model-index:
- name: mt5-xl-tr-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: musabg/wikipedia-tr-summarization
type: musabg/wikipedia-tr-summarization
split: validation
metrics:
- name: Rouge1
type: rouge
value: 56.4468
language:
- tr
---
# mT5-Xl Turkish Summarization
This model is a fine-tuned version of [google/mt5-xl](https://huggingface.co/google/mt5-xl) on the musabg/wikipedia-tr-summarization dataset.
It can be used with the HF summarization pipeline.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
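The `total_train_batch_size` above follows from gradient accumulation: gradients from several per-device micro-batches are accumulated before each optimizer step. A minimal sketch of the relationship (a single device is assumed here):

```python
def effective_batch_size(per_device_batch, accumulation_steps, n_devices=1):
    """Effective (total) batch size when gradients are accumulated over
    several forward/backward passes before each optimizer step."""
    return per_device_batch * accumulation_steps * n_devices

# Values from the table above: batch size 1, accumulated over 8 steps.
total = effective_batch_size(per_device_batch=1, accumulation_steps=8)
```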
### Eval results
It achieves the following results on the evaluation set:
- Loss: 0.5676
- Rouge1: 56.4468
- Rouge2: 41.3258
- Rougel: 48.1909
- Rougelsum: 48.4284
- Gen Len: 75.9265
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Irgendsoeine/FaceTheVote3
|
Irgendsoeine
| 2023-06-17T07:10:58Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-17T06:56:45Z |
---
pipeline_tag: image-classification
---
|
tux/dqn-SpaceInvadersNoFrameskip-v4
|
tux
| 2023-06-17T07:09:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T09:52:12Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 606.00 +/- 186.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tux -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tux -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tux
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
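The exploration settings above imply a linear epsilon schedule: epsilon decays from its initial value down to `exploration_final_eps` over the first `exploration_fraction` of training, then stays constant. A sketch of that schedule (SB3's default initial epsilon of 1.0 is assumed):

```python
def linear_epsilon(step, n_timesteps=1_000_000,
                   exploration_fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linear exploration schedule matching the hyperparameters above:
    epsilon ramps from initial_eps to final_eps over the first
    exploration_fraction of training, then remains at final_eps."""
    progress = min(step / (exploration_fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

eps_start = linear_epsilon(0)        # fully random actions at the start
eps_end = linear_epsilon(100_000)    # final_eps from step 100k onwards
```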
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Csuarezg/SBERTA-finetuned
|
Csuarezg
| 2023-06-17T07:04:30Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"es",
"dataset:xnli",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-12T21:04:08Z |
---
datasets:
- xnli
language:
- es
library_name: transformers
---
|
kjiwon1222/my_awesome_eli5_clm-model
|
kjiwon1222
| 2023-06-17T06:54:34Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T06:32:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8621 | 1.0 | 1137 | 3.7690 |
| 3.7782 | 2.0 | 2274 | 3.7533 |
| 3.7245 | 3.0 | 3411 | 3.7506 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Arindam75/Reinforce-pixelcopter-v1
|
Arindam75
| 2023-06-17T06:22:04Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T06:21:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.90 +/- 13.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
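The `mean_reward` value above (18.90 +/- 13.60) is a mean ± standard deviation over evaluation episodes. A minimal sketch of that summary string — the episode count and the use of population standard deviation are assumptions, not taken from the course notebook:

```python
import statistics

def report_mean_reward(episode_rewards):
    """Format episode rewards as the "mean +/- std" summary used on the
    Hub (population std is assumed here)."""
    mean = statistics.mean(episode_rewards)
    std = statistics.pstdev(episode_rewards)
    return f"{mean:.2f} +/- {std:.2f}"

summary = report_mean_reward([10.0, 20.0, 30.0])
```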
|
sunflowermarshmallows/dqn-SpaceInvadersNoFrameskip-v4
|
sunflowermarshmallows
| 2023-06-17T05:25:16Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T05:24:36Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 629.00 +/- 184.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunflowermarshmallows -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunflowermarshmallows -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sunflowermarshmallows
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nolanaatama/skrmkhllvjprvc500pchsmgzb
|
nolanaatama
| 2023-06-17T05:02:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T04:59:21Z |
---
license: creativeml-openrail-m
---
|
eason0203/Reinforce-cartpole
|
eason0203
| 2023-06-17T04:34:33Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T04:34:18Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 418.80 +/- 129.36
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nolanaatama/rngrndrvcv800pchsrthysttylrsvrsn
|
nolanaatama
| 2023-06-17T04:33:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-31T04:13:59Z |
---
license: creativeml-openrail-m
---
|
ALPHONSE28/EQUIPO06SEMANA09
|
ALPHONSE28
| 2023-06-17T04:33:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T06:38:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: EQUIPO06SEMANA09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EQUIPO06SEMANA09
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9233
- F1: 0.9514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-17T04:00:39Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T22:42:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benigh_200K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0194
- Accuracy: 0.9979
- F1: 0.9778
- Precision: 0.9987
- Recall: 0.9578
- Roc Auc Score: 0.9789
- Tpr At Fpr 0.01: 0.9642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0035 | 1.0 | 56250 | 0.0126 | 0.9975 | 0.9736 | 0.9917 | 0.9562 | 0.9779 | 0.9052 |
| 0.002 | 2.0 | 112500 | 0.0159 | 0.9977 | 0.9755 | 0.9975 | 0.9544 | 0.9771 | 0.9466 |
| 0.0008 | 3.0 | 168750 | 0.0136 | 0.9981 | 0.9793 | 0.9977 | 0.9616 | 0.9807 | 0.958 |
| 0.0 | 4.0 | 225000 | 0.0235 | 0.9973 | 0.9708 | 0.9992 | 0.944 | 0.9720 | 0.9574 |
| 0.0004 | 5.0 | 281250 | 0.0194 | 0.9979 | 0.9778 | 0.9987 | 0.9578 | 0.9789 | 0.9642 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/ShowmakerMix_v1
|
digiplay
| 2023-06-17T03:05:06Z | 310 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-13T01:35:32Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/16032/showmakermix
Original Author's DEMO image:

|
DreamerGPT/D7b-5-1
|
DreamerGPT
| 2023-06-17T01:38:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-17T01:20:31Z |
---
license: apache-2.0
---
# D7b-5-1
[https://github.com/DreamerGPT/DreamerGPT](https://github.com/DreamerGPT/DreamerGPT)
|
mskani/controlnet-hands
|
mskani
| 2023-06-17T01:35:09Z | 0 | 5 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T14:10:39Z |
---
license: creativeml-openrail-m
---
|
zhangjian94cn/Taxi-v3
|
zhangjian94cn
| 2023-06-17T01:33:35Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T01:33:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="zhangjian94cn/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
darshan7/Model_xlnet_results
|
darshan7
| 2023-06-17T01:22:18Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"xlnet",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-14T19:04:11Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: darshan7/Model_xlnet_results
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# darshan7/Model_xlnet_results
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0058
- Validation Loss: 0.0110
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 181655, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
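The `PolynomialDecay` configuration above (power 1.0, `cycle=False`) reduces to a straight linear ramp from the initial learning rate down to zero over `decay_steps`. A sketch of the schedule those values define:

```python
def polynomial_decay_lr(step, initial_lr=2e-5, decay_steps=181_655,
                        end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False: with power=1.0 this
    is a linear ramp from initial_lr to end_lr over decay_steps."""
    p = min(step, decay_steps) / decay_steps
    return (initial_lr - end_lr) * (1.0 - p) ** power + end_lr

lr_start = polynomial_decay_lr(0)          # initial learning rate
lr_end = polynomial_decay_lr(181_655)      # decayed to end_lr
```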
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0392 | 0.0262 | 0 |
| 0.0211 | 0.0185 | 1 |
| 0.0151 | 0.0161 | 2 |
| 0.0110 | 0.0127 | 3 |
| 0.0074 | 0.0110 | 4 |
| 0.0058 | 0.0110 | 5 |
| 0.0058 | 0.0110 | 6 |
| 0.0058 | 0.0110 | 7 |
| 0.0059 | 0.0110 | 8 |
| 0.0058 | 0.0110 | 9 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DreamerGPT/D13b-3-3
|
DreamerGPT
| 2023-06-17T01:21:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-17T00:58:23Z |
---
license: apache-2.0
---
# D13b-3-3
[https://github.com/DreamerGPT/DreamerGPT](https://github.com/DreamerGPT/DreamerGPT)
|
sheshenin/vvshsh
|
sheshenin
| 2023-06-17T00:40:05Z | 32 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-17T00:35:21Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### VikaSH Dreambooth model trained by sheshenin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Ioanaaaaaaa/bert-base-uncased-with-preprocess-finetuned-emotion-5-epochs-5e-05-lr-0.1-weight_decay
|
Ioanaaaaaaa
| 2023-06-16T23:47:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T23:30:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-with-preprocess-finetuned-emotion-5-epochs-5e-05-lr-0.1-weight_decay
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.941
- name: F1
type: f1
value: 0.9411169346964399
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-preprocess-finetuned-emotion-5-epochs-5e-05-lr-0.1-weight_decay
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2591
- Accuracy: 0.941
- F1: 0.9411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0799 | 1.0 | 250 | 0.1898 | 0.9375 | 0.9377 |
| 0.0516 | 2.0 | 500 | 0.2290 | 0.938 | 0.9383 |
| 0.0386 | 3.0 | 750 | 0.2107 | 0.9415 | 0.9419 |
| 0.0195 | 4.0 | 1000 | 0.2607 | 0.9435 | 0.9433 |
| 0.0149 | 5.0 | 1250 | 0.2591 | 0.941 | 0.9411 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
TheBloke/robin-33B-v2-GGML
|
TheBloke
| 2023-06-16T23:31:16Z | 0 | 5 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:09:39Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 33B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 33B v2](https://huggingface.co/OptimalScale/robin-33b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-33B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
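The bits-per-weight figures above can be checked from the block layouts they describe. As one example, here is the arithmetic for GGML_TYPE_Q4_K — the fp16 super-block scale and min are an assumption about the layout, not stated in the list above:

```python
def q4_k_bits_per_weight():
    """Bpw check for the q4_K layout described above: super-blocks of
    8 blocks x 32 weights, 4-bit quants, 6-bit scales and mins per
    block, plus one fp16 super-block scale and min (assumed)."""
    weights = 8 * 32                    # weights per super-block
    quant_bits = weights * 4            # 4-bit quantized values
    scale_bits = 8 * 6 + 8 * 6          # per-block scales and mins, 6 bits each
    super_bits = 2 * 16                 # fp16 super-scale and super-min
    return (quant_bits + scale_bits + super_bits) / weights

bpw = q4_k_bits_per_weight()  # matches the 4.5 bpw quoted above
```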
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| robin-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
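The "Max RAM required" column appears to be the file size plus a fixed overhead of roughly 2.5 GB (inferred from the table, not an official formula):

```python
def max_ram_gb(file_size_gb, overhead_gb=2.5):
    """Estimate of the 'Max RAM required' column above: file size plus a
    fixed ~2.5 GB overhead (context buffers etc.), assuming no GPU
    offload. The overhead value is inferred from the table."""
    return file_size_gb + overhead_gb

ram_q4_0 = max_ram_gb(18.30)   # q4_0 row lists 20.80 GB
ram_q2_k = max_ram_gb(13.71)   # q2_K row lists 16.21 GB
```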
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-33b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 33B v2
No model card provided in source repository.
|
ghze/Taxi_v3
|
ghze
| 2023-06-16T23:00:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T23:00:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ghze/Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
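Once loaded, acting from the Q-table is just an argmax over the actions for the current state. The sketch below assumes the loaded object is the dict the Deep RL course template pickles, with the table under a `qtable` key (an assumption; the toy table here is illustrative only):

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for `state` (ties -> first index)."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)

# Toy 2-state, 3-action table with made-up values, standing in for model["qtable"].
q = [[0.1, 0.5, 0.2],
     [0.9, 0.0, 0.3]]
print(greedy_action(q, 0))  # → 1
```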
|
ghze/Taxi
|
ghze
| 2023-06-16T22:59:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T22:59:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ghze/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sam34738/indicbert
|
sam34738
| 2023-06-16T22:03:57Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T21:56:33Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: indicbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indicbert
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9751
- Accuracy: 0.6689
- F1: 0.6899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7041 | 1.0 | 2100 | 0.7416 | 0.6589 | 0.6710 |
| 0.8083 | 2.0 | 4200 | 0.9751 | 0.6689 | 0.6899 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Enterprize1/ppo-LunarLander-v2
|
Enterprize1
| 2023-06-16T21:45:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T21:45:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.78 +/- 66.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
yinxiaoz/bert-finetuned-ner
|
yinxiaoz
| 2023-06-16T21:37:53Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T05:15:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9326065411298315
- name: Recall
type: recall
value: 0.9501851228542578
- name: F1
type: f1
value: 0.9413137712570858
- name: Accuracy
type: accuracy
value: 0.9867104256195914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0600
- Precision: 0.9326
- Recall: 0.9502
- F1: 0.9413
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0884 | 1.0 | 1756 | 0.0675 | 0.9186 | 0.9339 | 0.9261 | 0.9822 |
| 0.0345 | 2.0 | 3512 | 0.0611 | 0.9291 | 0.9485 | 0.9387 | 0.9862 |
| 0.0182 | 3.0 | 5268 | 0.0600 | 0.9326 | 0.9502 | 0.9413 | 0.9867 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
stanford-crfm/music-small-ar-800k
|
stanford-crfm
| 2023-06-16T21:28:12Z | 183 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:01:12Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-small-800k
|
stanford-crfm
| 2023-06-16T21:27:08Z | 664 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-04T23:54:35Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-medium-800k
|
stanford-crfm
| 2023-06-16T21:25:52Z | 572 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:17:20Z |
---
license: apache-2.0
---
This is a Medium (360M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-medium-100k
|
stanford-crfm
| 2023-06-16T21:24:54Z | 176 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:08:04Z |
---
license: apache-2.0
---
This is a Medium (360M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-large-100k
|
stanford-crfm
| 2023-06-16T21:24:11Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:22:37Z |
---
license: apache-2.0
---
This is a Large (780M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
Schnitzl/detr-resnet-50_finetuned_cppe5
|
Schnitzl
| 2023-06-16T20:54:42Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-16T17:17:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crlandsc/bsrnn-vocals
|
crlandsc
| 2023-06-16T20:25:39Z | 0 | 2 | null |
[
"audio source separation",
"music demixing",
"band-split recurrent neural network",
"bsrnn",
"spectrogram",
"vocals",
"region:us"
] | null | 2023-06-16T20:18:04Z |
---
tags:
- audio source separation
- music demixing
- band-split recurrent neural network
- bsrnn
- spectrogram
- vocals
---
# Model Card for bsrnn-vocals
Vocals model for [Music-Demixing-with-Band-Split-RNN](https://github.com/crlandsc/Music-Demixing-with-Band-Split-RNN).
|
GEMCorp/q-Taxi-v3
|
GEMCorp
| 2023-06-16T20:19:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T20:08:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="GEMCorp/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sngsfydy/resnet-50-finetuned-eurosat
|
sngsfydy
| 2023-06-16T20:17:05Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-16T19:14:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0706
- Accuracy: 0.5152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6069 | 0.99 | 20 | 1.5839 | 0.3879 |
| 1.5395 | 1.98 | 40 | 1.4860 | 0.5485 |
| 1.4321 | 2.96 | 60 | 1.3500 | 0.5364 |
| 1.3292 | 4.0 | 81 | 1.1826 | 0.5212 |
| 1.233 | 4.99 | 101 | 1.0706 | 0.5152 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
TheBloke/robin-13B-v2-GGML
|
TheBloke
| 2023-06-16T20:13:21Z | 0 | 6 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:59:47Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 13B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 13B v2](https://huggingface.co/OptimalScale/robin-13b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-13B-v2-fp16)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
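The bits-per-weight figures above can be sanity-checked against the file sizes in the table with a back-of-envelope estimate. The 13-billion parameter count below is a round-number assumption, and real files run a little larger because mixed quant types (e.g. q4_K_M) and non-quantised tensors add overhead:

```python
# Effective bits-per-weight (bpw) quoted above for each k-quant base type.
BPW = {"q2_K": 2.5625, "q3_K": 3.4375, "q4_K": 4.5, "q5_K": 5.5, "q6_K": 6.5625}

def est_size_gb(n_params: float, bpw: float) -> float:
    """Approximate on-disk size: parameters x bits-per-weight, in decimal GB."""
    return n_params * bpw / 8 / 1e9

# ~13e9 parameters is an approximation for a 13B LLaMA model.
for name, bpw in BPW.items():
    print(f"{name}: ~{est_size_gb(13e9, bpw):.2f} GB")
```

For q4_K this gives roughly 7.3 GB, in the same ballpark as the 7.37 GB q4_K_S file listed below.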
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| robin-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 13B v2
No model card provided in source repository.
|
FALLENSTAR/MitsubishiChariotLoRa
|
FALLENSTAR
| 2023-06-16T20:10:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-09T22:52:05Z |
### Model Description
This LoRa is based on the Mitsubishi Chariot/Chariot Grandis (1997-2003). It is also a test model and is poorly configured, so you will have to play with the settings...
The best images I was able to get with this LoRa were at these settings:
Steps: 25
Sampler: DPM++ SDE Karras,
CFG scale: 6.5
and with LoRa strength 0.8-1

















|
FALLENSTAR/CedricGloriaLoRa
|
FALLENSTAR
| 2023-06-16T20:10:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-09T20:58:56Z |
### Model Description
First of all, it's a LoRa. It is based on my favorite Nissan Cedric/Gloria Y31 Hardtop from the years '87-91. It is a test model, so it has defects. I don't remember how many samples and epochs were used for it... But with some of the checkpoints it turns out very similar and funny.
The best images I was able to get with this LoRa were at these settings:
Steps: 25
Sampler: DPM++ SDE Karras,
CFG scale: 6.5
and with LoRa strength 0.8-1
### Results













|
FALLENSTAR/TurbofansLoRa
|
FALLENSTAR
| 2023-06-16T20:09:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-11T01:32:23Z |
### Model Description
This LoRa is based on Turbofans, or Aero Covers, an invention from Japan. Turbofans were created to cool the brake discs effectively. Originally they were used in motorsports and were made out of aluminum.
Nowadays, thanks to new brake technology, Turbofans are no longer used for their original purpose, and they are not popular in professional motorsports.
But, to me, they add a futuristic style to car tuning.
The best images I was able to get with this LoRa were at these settings:
Steps: 25
Sampler: DPM++ SDE Karras,
CFG scale: 6.5
and with LoRa strength 0.8-1




|
TheBloke/robin-33B-v2-fp16
|
TheBloke
| 2023-06-16T20:07:31Z | 1,566 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-16T18:09:39Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 33B v2 fp16
These files are pytorch format fp16 model files for [OptimalScale's Robin 33B v2](https://huggingface.co/OptimalScale/robin-33b-v2-delta).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-33B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 33B v2
No model card provided in source repository.
|
TheBloke/robin-7B-v2-GGML
|
TheBloke
| 2023-06-16T20:04:09Z | 0 | 8 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:28:00Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 7B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 7B v2](https://huggingface.co/OptimalScale/robin-7b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
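When calling the model programmatically, the template above can be assembled with a small helper. This is an illustrative sketch — the function name and wrapper are not part of the model's API, only the template text is taken from above:

```python
def robin_prompt(user_message: str) -> str:
    # System preamble and turn markers copied verbatim from the template above
    system = ("A chat between a curious human and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the human's questions")
    return f"{system}\n###Human: {user_message}\n###Assistant:"

prompt = robin_prompt("write a story about llamas")
```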
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
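The bits-per-weight figures above can be reproduced with simple arithmetic over one 256-weight super-block. The sketch below assumes one fp16 super-block scale for "type-0" formats and an fp16 scale plus an fp16 min for "type-1" (Q2_K's packing differs slightly and is not covered):

```python
def kquant_bpw(weight_bits, n_blocks, scale_bits, min_bits=0, n_fp16_super=1):
    """Effective bits per weight for one 256-weight super-block."""
    n_weights = 256
    data = weight_bits * n_weights              # the quantized weights themselves
    meta = n_blocks * (scale_bits + min_bits)   # per-block scales (and mins)
    supers = 16 * n_fp16_super                  # fp16 super-block scale(s)
    return (data + meta + supers) / n_weights

assert kquant_bpw(3, 16, 6) == 3.4375                              # GGML_TYPE_Q3_K ("type-0")
assert kquant_bpw(4, 8, 6, min_bits=6, n_fp16_super=2) == 4.5      # GGML_TYPE_Q4_K ("type-1")
assert kquant_bpw(5, 8, 6, min_bits=6, n_fp16_super=2) == 5.5      # GGML_TYPE_Q5_K ("type-1")
assert kquant_bpw(6, 16, 8) == 6.5625                              # GGML_TYPE_Q6_K ("type-0")
```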
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| robin-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
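In the table above, each "Max RAM required" figure is the file size plus roughly 2.5 GB of inference overhead, so a quick feasibility check (a rough heuristic, not an exact measurement) might look like:

```python
def fits_in_ram(file_size_gb, total_ram_gb, overhead_gb=2.5):
    # "Max RAM required" in the table above is file size + ~2.5 GB overhead
    return file_size_gb + overhead_gb <= total_ram_gb

assert fits_in_ram(4.63, 8.0)        # q5_0 (7.13 GB total) fits on an 8 GB machine
assert not fits_in_ram(7.16, 8.0)    # q8_0 needs ~9.66 GB
```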
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 7B v2
No model card provided in source repository.
|
GEMCorp/q-FrozenLake-v1-4x4-noSlippery
|
GEMCorp
| 2023-06-16T19:51:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:51:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is defined in the Deep RL Course notebook (Unit 2)
model = load_from_hub(repo_id="GEMCorp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
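Once the Q-table is loaded, the agent simply acts greedily over it. A minimal illustration — the 2-state, 4-action table below is made up for the example; the real pickle stores the trained table:

```python
import numpy as np

def greedy_policy(qtable, state):
    # choose the action with the highest Q-value in this state
    return int(np.argmax(qtable[state]))

# hypothetical 2-state, 4-action Q-table, for illustration only
q = np.array([[0.0, 0.5, 0.1, 0.2],
              [0.9, 0.0, 0.0, 0.0]])
assert greedy_policy(q, 0) == 1
assert greedy_policy(q, 1) == 0
```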
|
ChristineCheng/my_awesome_eli5_clm-model
|
ChristineCheng
| 2023-06-16T19:49:19Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T19:33:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ChristineCheng/my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ChristineCheng/my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7347
- Validation Loss: 3.7399
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9119 | 3.7667 | 0 |
| 3.7942 | 3.7493 | 1 |
| 3.7347 | 3.7399 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
CodyKilpatrick/Reinforce-Pixelcopter-PLE-v0
|
CodyKilpatrick
| 2023-06-16T19:43:03Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-12T15:12:47Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 98.70 +/- 89.31
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
karina-aquino/spanish-sentiment-model
|
karina-aquino
| 2023-06-16T19:41:41Z | 36 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T21:51:39Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: spanish-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-sentiment-model
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0046
- Accuracy: 0.65
- F1: 0.6646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 375 | 1.0046 | 0.65 | 0.6646 |
| 1.2137 | 2.0 | 750 | 1.0212 | 0.61 | 0.6398 |
| 0.9497 | 3.0 | 1125 | 1.0247 | 0.6133 | 0.6478 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ananay/kneearch
|
ananay
| 2023-06-16T19:17:59Z | 22 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-16T19:05:11Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kneearch Dreambooth model trained by ananay with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-16T19:17:42Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T15:49:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benigh_200K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Accuracy: 0.9978
- F1: 0.9767
- Precision: 0.9994
- Recall: 0.955
- Roc Auc Score: 0.9775
- Tpr At Fpr 0.01: 0.9632
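As a sanity check, the reported F1 follows directly from the precision and recall above, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

assert round(f1_score(0.9994, 0.955), 4) == 0.9767
```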
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0057 | 1.0 | 35625 | 0.0113 | 0.9979 | 0.9779 | 0.9954 | 0.961 | 0.9804 | 0.9518 |
| 0.0035 | 2.0 | 71250 | 0.0150 | 0.9975 | 0.9726 | 0.9983 | 0.9482 | 0.9741 | 0.95 |
| 0.0011 | 3.0 | 106875 | 0.0175 | 0.9975 | 0.9727 | 0.9994 | 0.9474 | 0.9737 | 0.9554 |
| 0.0009 | 4.0 | 142500 | 0.0160 | 0.9979 | 0.9778 | 0.9990 | 0.9576 | 0.9788 | 0.9618 |
| 0.0 | 5.0 | 178125 | 0.0192 | 0.9978 | 0.9767 | 0.9994 | 0.955 | 0.9775 | 0.9632 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|