modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-03 18:30:32) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 537 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-03 18:30:19) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
Leukschrauber/Taxi-v3
|
Leukschrauber
| 2023-06-22T10:05:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T10:05:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Leukschrauber/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
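The snippet above does not define `load_from_hub`; it is not part of a published package. A minimal sketch of an equivalent helper built on `huggingface_hub` (the pickled-dict layout with an `env_id` key is inferred from the snippet, not documented in this repository):
```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model dict from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```
With this helper in scope (plus `import gym`), the snippet above runs as written.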
|
ammaradel/PSU-LLaMA-Inference
|
ammaradel
| 2023-06-22T09:59:05Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"question-answering",
"en",
"region:us"
] |
question-answering
| 2023-06-15T07:55:31Z |
---
metrics:
- accuracy
- bertscore
- bleu
- bleurt
- brier_score
- cer
pipeline_tag: question-answering
language:
- en
library_name: adapter-transformers
---
LLaMA model fine-tuned on PSU Dataset 3 (8K lines)
|
Leukschrauber/q-FrozenLake-v1-4x4-noSlippery
|
Leukschrauber
| 2023-06-22T09:57:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T09:57:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Leukschrauber/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2_left_out_switchboard
|
NasimB
| 2023-06-22T09:50:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-21T19:41:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2_left_out_switchboard
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_left_out_switchboard
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
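For reference, these values map directly onto `transformers` `TrainingArguments` fields. A hedged sketch of how such a run could be configured (the output directory is a placeholder and the `Trainer`/dataset wiring is omitted; this is not the author's actual training script):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="gpt2_left_out_switchboard",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
)
```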
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.983 | 0.24 | 500 | 5.0786 |
| 4.7603 | 0.48 | 1000 | 4.6865 |
| 4.4521 | 0.73 | 1500 | 4.4635 |
| 4.2512 | 0.97 | 2000 | 4.3124 |
| 4.0458 | 1.21 | 2500 | 4.2272 |
| 3.9687 | 1.45 | 3000 | 4.1443 |
| 3.9024 | 1.69 | 3500 | 4.0705 |
| 3.8439 | 1.93 | 4000 | 4.0057 |
| 3.6791 | 2.18 | 4500 | 3.9845 |
| 3.6259 | 2.42 | 5000 | 3.9471 |
| 3.6137 | 2.66 | 5500 | 3.9057 |
| 3.592 | 2.9 | 6000 | 3.8654 |
| 3.4438 | 3.14 | 6500 | 3.8758 |
| 3.3844 | 3.38 | 7000 | 3.8570 |
| 3.3977 | 3.63 | 7500 | 3.8324 |
| 3.4015 | 3.87 | 8000 | 3.8053 |
| 3.2638 | 4.11 | 8500 | 3.8300 |
| 3.1771 | 4.35 | 9000 | 3.8250 |
| 3.1914 | 4.59 | 9500 | 3.8070 |
| 3.1993 | 4.84 | 10000 | 3.7853 |
| 3.1089 | 5.08 | 10500 | 3.8146 |
| 2.9539 | 5.32 | 11000 | 3.8262 |
| 2.9853 | 5.56 | 11500 | 3.8173 |
| 2.9984 | 5.8 | 12000 | 3.8020 |
| 2.9462 | 6.04 | 12500 | 3.8259 |
| 2.7343 | 6.29 | 13000 | 3.8527 |
| 2.7724 | 6.53 | 13500 | 3.8499 |
| 2.7817 | 6.77 | 14000 | 3.8423 |
| 2.7789 | 7.01 | 14500 | 3.8510 |
| 2.5477 | 7.25 | 15000 | 3.8873 |
| 2.5643 | 7.5 | 15500 | 3.8904 |
| 2.5842 | 7.74 | 16000 | 3.8896 |
| 2.5913 | 7.98 | 16500 | 3.8858 |
| 2.4293 | 8.22 | 17000 | 3.9177 |
| 2.4253 | 8.46 | 17500 | 3.9231 |
| 2.4274 | 8.7 | 18000 | 3.9240 |
| 2.4331 | 8.95 | 18500 | 3.9254 |
| 2.362 | 9.19 | 19000 | 3.9346 |
| 2.3519 | 9.43 | 19500 | 3.9373 |
| 2.3498 | 9.67 | 20000 | 3.9378 |
| 2.3461 | 9.91 | 20500 | 3.9378 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nolanspecter/ppo-Huggy
|
nolanspecter
| 2023-06-22T09:44:55Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T09:44:13Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nolanspecter/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
user1251/soccer_finetuned_model_final5
|
user1251
| 2023-06-22T09:33:06Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T09:28:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model_final5
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model_final5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 189 | 3.9536 |
| No log | 2.0 | 378 | 3.9239 |
| 3.7068 | 3.0 | 567 | 3.9197 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dhifanrazaqa/t5-end2end-questions-generation
|
dhifanrazaqa
| 2023-06-22T09:19:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-02T06:51:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [muchad/idt5-base](https://huggingface.co/muchad/idt5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.3156 | 0.34 | 100 | 2.2625 |
| 2.5509 | 0.67 | 200 | 2.0394 |
| 2.3619 | 1.01 | 300 | 1.9596 |
| 2.2501 | 1.34 | 400 | 1.9272 |
| 2.2 | 1.68 | 500 | 1.9074 |
| 2.1682 | 2.02 | 600 | 1.8882 |
| 2.1222 | 2.35 | 700 | 1.8893 |
| 2.0874 | 2.69 | 800 | 1.8722 |
| 2.0751 | 3.03 | 900 | 1.8656 |
| 2.0501 | 3.36 | 1000 | 1.8506 |
| 2.0338 | 3.7 | 1100 | 1.8491 |
| 2.0326 | 4.03 | 1200 | 1.8428 |
| 1.9992 | 4.37 | 1300 | 1.8445 |
| 2.0081 | 4.71 | 1400 | 1.8449 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
slimsha2dy/my_awesome_model
|
slimsha2dy
| 2023-06-22T09:18:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T09:07:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: my_awesome_model
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: test
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1855
- Accuracy: 0.925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2413 | 1.0 | 1000 | 0.2020 | 0.922 |
| 0.1451 | 2.0 | 2000 | 0.1855 | 0.925 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chansung/gpt4-alpaca-lora-13b-decapoda-1024
|
chansung
| 2023-06-22T09:16:45Z | 0 | 4 | null |
[
"safetensors",
"alpaca",
"llama",
"chat",
"gpt4",
"text2text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-04-17T01:27:23Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
tags:
- alpaca
- llama
- chat
- gpt4
---
This repository provides a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning run with the settings below on an 8xA100 (40G) DGX system. In this fine-tuning process, the token length was set to `1024` since GPT tends to generate long texts.
- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
- Training command:
```shell
python finetune.py \
--base_model='decapoda-research/llama-13b-hf' \
--data_path='alpaca_data_gpt4.json' \
--num_epochs=10 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./gpt4-alpaca-lora-13b' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--batch_size=... \
--micro_batch_size=...
```
You can find how the training went from W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/s5xxpdal?workspace=user-chansung18).
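The card only documents training; for inference, the LoRA weights would typically be attached to the base model with `peft`. A minimal, hedged sketch (the generation settings and exact prompt template are assumptions, not stated in this repository):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "decapoda-research/llama-13b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(model, "chansung/gpt4-alpaca-lora-13b-decapoda-1024")

prompt = "### Instruction:\nList all Canadian provinces in alphabetical order.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```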
## Example output
1. Example #1
```
Instruction: List all Canadian provinces in alphabetical order.
Response: List of Canadian provinces in alphabetical order:
Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Northwest Territories, Nova Scotia, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan.
Instruction: Which ones are on the east side?
Response: Eastern provinces of Canada are: Newfoundland and Labrador, Nova Scotia, Prince Edward Island, New Brunswick.
Instruction: What foods are famous in each province on the east side?
Response: Newfoundland and Labrador: Jiggs Dinner, Screech, Partridgeberry Wine, Flipper Pie, Nanny’s Cookies. Nova Scotia: Donair, Pizza Burger, Apple Pie, Halifax Donair. New Brunswick:
Fried Dough (Malpeque Doughnuts), Poutine, Acadian Foods (Cote-de-beouf, potato chips, cabbage rolls, pickled beets). Prince Edward Island: Coleslaw, Lobster, Potato Bread, Blueberry Pies, Butter Tarts.
Instruction: What about sightseeing? or landmarks? list one per province
Response: Newfoundland and Labrador: Cabot Tower (St. John’s), Terra Nova National Park, Gros Morne National Park, Signal Hill. Nova Scotia: Cabot Trail, Peggy’s Cove, Lighthouses, Halifax Waterfront. New Brunswick: Fundy National Park, Bay of Fundy Tides,
Loyalist House, Hopewell Rocks. Prince Edward Island: Green Gables Shore, Cavendish National Park, Anne of Green Gables House.
```
2. Example #2
```
Instruction: Tell me about alpacas.
Response: Alpacas are a type of llama that live in the Andes mountains in South America. They are native to Peru, Chile, Bolivia, and Ecuador, where they are valued for their fiber. Alpacas are related to camels and they are cousins of the llama. Alpacas have woolly fur that comes in many colors and they are very gentle creatures. They are considered the smaller cousins of the llama and they are about the size of a medium dog. Alpacas have long necks and long, lush fur around their necks, which is known as their “ruff.” They also have long lashes that help keep dust and debris from their eyes. Alpacas have hooves that help them move easily through the rocky terrain of the Andes mountains. They make a soft “humbing” sound when they are content. Alpacas live in herds, and females are called “huacas,” and males are called “chancos.” Alpacas can live for up to 20 years. Alpacas can come in a variety of colors, including white, black, brown,
Instruction: What other animals are living in the same area?
Response: Alpacas live in the Andes mountains, and in that region there are many other animals. There are condors, vicunas, pumas, rheas, foxes, chinchillas, and more.
Instruction: Are they the same species?
Response: Alpacas and llamas are both members of the camelid family, but there are differences. Alpacas have longer necks, smaller ears, and shorter, thicker wool. Llamas are larger and their wool is coarser.
```
3. Example #3
```
Instruction: Tell me about the king of France in 2019.
Response: In 2019, King of France is not applicable, as France does not have a monarch. It has been a republic since the 1790s and its constitution prohibits the monarchy from being reinstated. The President of France is currently Emmanuel Macron.
Instruction: What about before him?
Response: Before Emmanuel Macron, there were 5 other presidents: Nicolas Sarkozy, Jacques Chirac, François Mitterrand, Valéry Giscard d’Estaing, and Georges Pompidou.
```
4. Example #4
```
Instruction: Write a Python program that prints the first 10 Fibonacci numbers.
Response:
def fibonacci_numbers():
    a, b = 0, 1
    for i in range(10):
        a, b = b, a + b
        print(a, b)

fibonacci_numbers()
Output: 0 1 1 2 3 5 8 13 21 34
Instruction: Could you explain how the code works?
Response: First, the function fibonacci_numbers() is defined. The range function creates a range of numbers, in this case between 0 and 10.
Inside the loop, we initialize two variables a and b to 0 and 1 respectively. a, b represent the previous and current Fibonacci numbers respectively. For each number in the range, we add the previous Fibonacci number b to the current one a.
At the end of the loop, the a and b variables will hold the Fibonacci numbers from 0 to 10. This is then printed on the screen.
Instruction: What is recursion?
Response: Recursion is a way of defining a method in which the method calls itself until a certain condition is met. In Python, recursion is used when there is a task that can be broken into smaller tasks that can be done recursively. A recursive function can be defined as any function that calls itself in the body of the function.
```
|
rudzhehdehd/Love_Letter
|
rudzhehdehd
| 2023-06-22T09:13:17Z | 170 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T09:07:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: Love_Letter
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Love_Letter
This model is a fine-tuned version of [EasthShin/BTS_Lyrics_GPT-Neo-base](https://huggingface.co/EasthShin/BTS_Lyrics_GPT-Neo-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 1.2588 |
| No log | 2.0 | 400 | 1.1366 |
| 1.3097 | 3.0 | 600 | 1.1046 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rudzhehdehd/To_my_Love
|
rudzhehdehd
| 2023-06-22T08:40:50Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T06:42:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: To_my_Love
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# To_my_Love
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2757 | 1.0 | 860 | 1.8783 |
| 1.8982 | 2.0 | 1720 | 1.7536 |
| 1.8221 | 3.0 | 2580 | 1.7184 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
97jmlr/ddpm-celebahq-finetuned-butterflies-2epochs
|
97jmlr
| 2023-06-22T08:00:12Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-06-22T07:59:24Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('97jmlr/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
dipesh1111/Redpajama-7b-chat-lora-wiseyak
|
dipesh1111
| 2023-06-22T07:54:57Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T07:54:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
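In current `transformers`/`peft` releases these flags correspond to a `BitsAndBytesConfig`. A hedged sketch of loading this adapter with the same quantization settings (the base model id is read from the adapter config, since it is not stated on this card):
```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

adapter_id = "dipesh1111/Redpajama-7b-chat-lora-wiseyak"
peft_config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)
```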
### Framework versions
- PEFT 0.4.0.dev0
|
laampt/gpt-j-6B-sft-lora-ngoanlm-merged
|
laampt
| 2023-06-22T07:54:57Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"news",
"vietai",
"qa",
"lora",
"vi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T07:20:17Z |
---
license: apache-2.0
language:
- vi
- en
metrics:
- accuracy
- rouge
library_name: transformers
tags:
- news
- vietai
- qa
- lora
---
# GPT-J-6B SFT with LoRA
### How it works
- Base model is `VietAI/gpt-j-6B-vietnamese-news`
- LoRA SFT from ngoanlm
- Merged weight
### How to use
```python
import torch
import textwrap
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" if torch.cuda.is_available() else "cpu"
EVAL_PROMPTS = [
"Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.\n\n### Instruction: Messi đã đạt bao nhiêu quả bóng vàng? \n\n### Response: ",
"Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.\n\n### Instruction: Thủ đô nào đông dân nhất châu Á? \n\n### Response: ",
"Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.\n\n### Instruction: Quốc gia nào có đường biển dài nhất? \n\n### Response: ",
]
def generate_eval(model: AutoModelForCausalLM, tokenizer: AutoTokenizer):
    print("Starting Evaluation...")
    model = model.to(device)
    model.eval()
    for eval_prompt in EVAL_PROMPTS:
        batch = tokenizer(eval_prompt, return_tensors="pt").to(device)
        with torch.cuda.amp.autocast():
            output_tokens = model.generate(**batch, max_new_tokens=128)
        print("\n\n", textwrap.fill(tokenizer.decode(output_tokens[0], skip_special_tokens=False)))
        print("*" * 100)
# Load the Lora model
model = AutoModelForCausalLM.from_pretrained(f"laampt/gpt-j-6B-sft-lora-ngoanlm-merged")
tokenizer = AutoTokenizer.from_pretrained("VietAI/gpt-j-6B-vietnamese-news")
generate_eval(model, tokenizer)
```
The output should be:
``` bash
Starting Evaluation...
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây. ###
Instruction: Messi đã đạt bao nhiêu quả bóng vàng? ### Response:
Messi đã giành được tổng cộng 5 Quả bóng vàng.<|endoftext|>
****************************************************************************************************
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây. ###
Instruction: Thủ đô nào đông dân nhất châu Á? ### Response: Thủ đô
đông dân nhất châu Á là Bắc Kinh, Trung Quốc.<|endoftext|>
****************************************************************************************************
Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây. ###
Instruction: Quốc gia nào có đường biển dài nhất? ### Response:
Quốc gia có đường biển dài nhất là Nhật Bản với chiều dài khoảng
38.000 dặm (60.000 km).<|endoftext|>
****************************************************************************************************
```
|
eliztmq/ppo-Huggy
|
eliztmq
| 2023-06-22T07:50:38Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T07:50:33Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: eliztmq/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
nolanaatama/crgtckrsthprkrvc500pchsxcsdr
|
nolanaatama
| 2023-06-22T07:47:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T07:45:31Z |
---
license: creativeml-openrail-m
---
|
KORguy/shirt
|
KORguy
| 2023-06-22T07:26:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T06:42:55Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks T-shirt
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - KORguy/shirt
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks T-shirt using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
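A minimal `diffusers` loading sketch for this checkpoint, using the instance prompt stated above (scheduler and generation settings are left at their defaults; GPU use is assumed):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("KORguy/shirt", torch_dtype=torch.float16).to("cuda")

image = pipe("a photo of sks T-shirt").images[0]
image.save("sks_shirt.png")
```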
|
mache102/ppo-LunarLander-v2
|
mache102
| 2023-06-22T07:11:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T07:11:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 254.62 +/- 15.81
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
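As a placeholder for the TODO above, a minimal sketch of downloading and evaluating the checkpoint with `huggingface_sb3`; the checkpoint filename is an assumption, so adjust it to the file actually stored in this repository:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is a guess based on the repo name; check the repository's file list.
checkpoint = load_from_hub(repo_id="mache102/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```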
|
laampt/gpt-j-6B-sft-lora-ngoanlm
|
laampt
| 2023-06-22T07:07:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T07:07:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
vnykr/rl_course_vizdoom_health_gathering_supreme-v3
|
vnykr
| 2023-06-22T07:03:53Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T07:03:44Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 10.95 +/- 4.20
      name: mean_reward
      verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r vnykr/rl_course_vizdoom_health_gathering_supreme-v3
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-v3
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-v3 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Tiru8055/a2c-PandaReachDense-v2
|
Tiru8055
| 2023-06-22T06:51:10Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T06:48:26Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v2
      type: PandaReachDense-v2
    metrics:
    - type: mean_reward
      value: -0.39 +/- 0.13
      name: mean_reward
      verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YoneShiro/CartPole-v1
|
YoneShiro
| 2023-06-22T06:48:22Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T13:12:58Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
curiousily/falcon-7b-qlora-chat-support-bot-faq-merged
|
curiousily
| 2023-06-22T06:45:33Z | 14 | 4 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"arxiv:2305.14314",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T16:52:29Z |
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
A [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) model fine-tuned with [QLoRA](https://arxiv.org/abs/2305.14314) on a custom e-commerce chatbot dataset.
- Fine-tuning Falcon 7B with QLoRA tutorial
  - [Text tutorial on MLExpert.io](https://www.mlexpert.io/prompt-engineering/fine-tuning-llm-on-custom-dataset-with-qlora)
  - [YouTube tutorial](https://www.youtube.com/watch?v=DcBC4yGHV4Q)
- Deploy LLM on Inference Endpoints tutorial
  - [Text tutorial on MLExpert.io](https://www.mlexpert.io/prompt-engineering/deploy-llm-to-production)
  - [YouTube tutorial](https://www.youtube.com/watch?v=HI3cYN0c9ZU)
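The card does not include a loading example; a minimal sketch following the usual Falcon-7B loading pattern (the prompt template and generation settings are assumptions, not documented in this repository):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "curiousily/falcon-7b-qlora-chat-support-bot-faq-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Falcon checkpoints ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

prompt = "<human>: How can I track my order?\n<assistant>:"  # prompt format is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```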
|
tibok/baichuan-7B-chatml
|
tibok
| 2023-06-22T06:35:56Z | 16 | 3 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"text-generation-inference",
"custom_code",
"en",
"zh",
"dataset:BelleGroup/train_0.5M_CN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-21T13:14:27Z |
---
license: apache-2.0
datasets:
- BelleGroup/train_0.5M_CN
language:
- en
- zh
tags:
- text-generation-inference
widget:
- text: |-
    <|im_start|>user
    请以『春天的北京』为题写一首诗歌
    <|im_end|>
    <|im_start|>assistant
  example_title: generation zh
---
# Baichuan 7B ChatML
## 介绍 Introduction
`baichuan-7B-chatml` 是支持多轮对话兼容于 ChatML 的模型。
模型基于 [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) 微调而成。
`baichuan-7B-chatml` 模型支持商用。但按照baichuan-7B的要求,如果将baichuan-7B衍生品用作商业用途,需要联系[baichuan-7B 的许可方](https://github.com/baichuan-inc/baichuan-7B#%E5%8D%8F%E8%AE%AE)。
__需要注意:在面对事实性知识任务时,模型可能会生成不正确的信息或者产生不稳定的输出(有时可以返回正确答案,有时不能)。__
`baichuan-7B-chatml` is a model that supports multi-turn dialog and is compatible with ChatML.
The model is fine-tuned based on [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B).
`baichuan-7B-chatml` model supports commercial use. However, according to the requirements of baichuan-7B, if baichuan-7B derivatives are used for commercial purposes, you need to contact [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B#%E5%8D%8F%E8%AE%AE)。
__Note: When dealing with factual knowledge tasks, it may generate incorrect information or unstable output (sometimes it can return the correct answer, sometimes not).__
## 代码示例 Examples
模型在百川的基础上提供了对轮对话的函数供调用。
The model provides a function for multi-turn dialogs.
```ipython
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("tibok/baichuan-7B-chatml", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("tibok/baichuan-7B-chatml", device_map="auto", trust_remote_code=True)
>>> response, history = model.chat(tokenizer, "请以『春天的北京』为题写一首诗歌", history=[])
春天的北京,
花开万丈,
春意盎然,
清风送暖。
<|im_end|>
>>> response, history = model.chat(tokenizer, "能不能再写一首关于香山的?", history=history)
>>> print(response)
香山之巅,
芳草连天。
清泉潺潺,
山峦绵绵。
<|im_end|>
```
## 更多细节 Details
- Dataset: [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
- steps: 13800
- batch_size: 8
- seq_len: 2048
|
KORguy/textual_inversion_shirt
|
KORguy
| 2023-06-22T06:28:48Z | 22 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T05:27:20Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - KORguy/textual_inversion_shirt
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
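No loading example is provided; with `diffusers`, textual inversion weights are typically attached to the base pipeline as sketched below (the placeholder token learned during training is not stated here, so the prompt token is a stand-in):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the learned embedding from this repository.
pipe.load_textual_inversion("KORguy/textual_inversion_shirt")

# Replace <placeholder-token> with the token used when the embedding was trained.
image = pipe("a photo of <placeholder-token> on a table").images[0]
image.save("shirt.png")
```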
|
AkitoP/Anime-yolov8-seg
|
AkitoP
| 2023-06-22T06:24:32Z | 0 | 3 | null |
[
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-06-16T12:09:48Z |
---
license: cc-by-sa-3.0
---
Yolov8m-seg model for Anime character segmentation.
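No usage example is given; with the `ultralytics` package, a YOLOv8 segmentation checkpoint is typically used as sketched below (the weight filename inside this repository is a guess, so check the repo's file list):
```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Filename is an assumption; replace it with the actual .pt file in the repository.
weights = hf_hub_download(repo_id="AkitoP/Anime-yolov8-seg", filename="yolov8m-seg.pt")
model = YOLO(weights)

results = model("anime_frame.png")  # path or URL to an image
for r in results:
    print(r.masks)  # segmentation masks for the detected characters
```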
|
evatan/path_to_saved_model
|
evatan
| 2023-06-22T06:24:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T06:13:44Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/path_to_saved_model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
getrajeev03/test-huggingface-ibm
|
getrajeev03
| 2023-06-22T06:22:28Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-21T14:25:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: test-huggingface-ibm
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: test
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 46.7991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-huggingface-ibm
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3861
- Rouge1: 46.7991
- Rouge2: 23.253
- Rougel: 39.2606
- Rougelsum: 43.0759
- Gen Len: 17.1880
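A hedged usage sketch for dialogue summarization with this checkpoint via the `transformers` pipeline (the dialogue below is an illustrative samsum-style input, not from this card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="getrajeev03/test-huggingface-ibm")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=50)[0]["summary_text"])
```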
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.4378 | 1.0 | 1842 | 1.3861 | 46.7991 | 23.253 | 39.2606 | 43.0759 | 17.1880 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.0
- Tokenizers 0.11.0
|
slkoix/ZORO
|
slkoix
| 2023-06-22T06:08:30Z | 0 | 0 |
nemo
|
[
"nemo",
"medical",
"token-classification",
"dataset:JosephusCheung/GuanacoDataset",
"arxiv:1910.09700",
"license:bigscience-openrail-m",
"region:us"
] |
token-classification
| 2023-06-22T06:07:40Z |
---
license: bigscience-openrail-m
datasets:
- JosephusCheung/GuanacoDataset
metrics:
- cer
library_name: nemo
pipeline_tag: token-classification
tags:
- medical
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chjooon/distilroberta_mask_16000_4000
|
chjooon
| 2023-06-22T06:04:41Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T05:21:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta_mask_16000_4000
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta_mask_16000_4000
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7052 | 1.0 | 3172 | 0.5631 |
| 0.5438 | 2.0 | 6344 | 0.4294 |
| 0.4811 | 3.0 | 9516 | 0.3951 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
pongjin/roberta_with_kornli
|
pongjin
| 2023-06-22T05:53:14Z | 1,419 | 6 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"zero-shot-classification",
"ko",
"dataset:kor_nli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-06-22T02:39:02Z |
---
license: apache-2.0
datasets:
- kor_nli
language:
- ko
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
**This model was built with reference to the following repository: https://github.com/Huffon/klue-transformers-tutorial.git**
It fine-tunes the klue/roberta-base model on the MNLI and XNLI portions of kor_nli, following the repository above.
| train_loss | val_loss | acc | epoch | batch | lr |
| --- | --- | --- | --- | --- | --- |
| 0.326 | 0.538 | 0.811 | 3 | 32 | 2e-5 |
For models that, like RoBERTa, do not use token_type_ids, the zero-shot pipeline cannot be applied directly (as of transformers==4.7.0).
The conversion code below therefore has to be added; it is likewise adapted from the repository above.
```python
from abc import ABC, abstractmethod


class ArgumentHandler(ABC):
    """
    Base interface for handling arguments for each :class:`~transformers.pipelines.Pipeline`.
    """

    @abstractmethod
    def __call__(self, *args, **kwargs):
        raise NotImplementedError()


class CustomZeroShotClassificationArgumentHandler(ArgumentHandler):
    """
    Handles arguments for zero-shot for text classification by turning each possible label into an NLI
    premise/hypothesis pair.
    """

    def _parse_labels(self, labels):
        if isinstance(labels, str):
            labels = [label.strip() for label in labels.split(",")]
        return labels

    def __call__(self, sequences, labels, hypothesis_template):
        if len(labels) == 0 or len(sequences) == 0:
            raise ValueError("You must include at least one label and at least one sequence.")
        if hypothesis_template.format(labels[0]) == hypothesis_template:
            raise ValueError(
                (
                    'The provided hypothesis_template "{}" was not able to be formatted with the target labels. '
                    "Make sure the passed template includes formatting syntax such as {{}} where the label should go."
                ).format(hypothesis_template)
            )
        if isinstance(sequences, str):
            sequences = [sequences]
        labels = self._parse_labels(labels)
        sequence_pairs = []
        for label in labels:
            # Modified part: pre-join the two sentences with `sep_token` so that `token_type_ids`
            # are not added automatically when a sentence pair is passed.
            sequence_pairs.append(f"{sequences} {tokenizer.sep_token} {hypothesis_template.format(label)}")
        return sequence_pairs, sequences
```
This handler must then be passed in when the classifier is defined.
```python
from transformers import AutoTokenizer, pipeline

# The handler above reads `tokenizer` from module scope, so define it first.
tokenizer = AutoTokenizer.from_pretrained("pongjin/roberta_with_kornli")

classifier = pipeline(
    "zero-shot-classification",
    args_parser=CustomZeroShotClassificationArgumentHandler(),
    model="pongjin/roberta_with_kornli"
)
```
#### results
```python
sequence = "배당락 D-1 코스피, 2330선 상승세...외인·기관 사자"
candidate_labels =["외환",'환율', "경제", "금융", "부동산","주식"]
classifier(
sequence,
candidate_labels,
hypothesis_template='이는 {}에 관한 것이다.',
)
>>{'sequence': '배당락 D-1 코스피, 2330선 상승세...외인·기관 사자',
'labels': ['주식', '금융', '경제', '외환', '환율', '부동산'],
'scores': [0.5052872896194458,
0.17972524464130402,
0.13852974772453308,
0.09460823982954025,
0.042949128895998,
0.038900360465049744]}
```
|
lin666666/ppo-LunarLander-v2
|
lin666666
| 2023-06-22T05:41:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T05:40:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 274.11 +/- 19.50
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Retrytoo/Rai
|
Retrytoo
| 2023-06-22T05:31:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T05:31:38Z |
---
license: creativeml-openrail-m
---
|
user1251/football_model
|
user1251
| 2023-06-22T05:22:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-21T17:27:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: football_model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# football_model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 1.6801 |
| No log | 2.0 | 2 | 1.6186 |
| No log | 3.0 | 3 | 1.5887 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dg845/diffusers-cd_cat256_lpips
|
dg845
| 2023-06-22T05:18:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"generative model",
"unconditional image generation",
"arxiv:2303.01469",
"arxiv:2206.00364",
"arxiv:1506.03365",
"arxiv:1512.00567",
"license:mit",
"diffusers:ConsistencyModelPipeline",
"region:us"
] | null | 2023-06-21T11:20:53Z |
---
license: mit
tags:
- generative model
- unconditional image generation
---
Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
From the paper abstract:
> Diffusion models have significantly advanced the fields of image, audio, and video generation, but
they depend on an iterative sampling process that causes slow generation. To overcome this limitation,
we propose consistency models, a new family of models that generate high quality samples by directly
mapping noise to data. They support fast one-step generation by design, while still allowing multistep
sampling to trade compute for sample quality. They also support zero-shot data editing, such as image
inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone
generative models altogether. Through extensive experiments, we demonstrate that they outperform
existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new
state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When
trained in isolation, consistency models become a new family of generative models that can outperform
existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet
64 x 64 and LSUN 256 x 256.
Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model.
Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.
More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep.
The authors call this procedure "consistency distillation (CD)".
Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
This model is a `diffusers`-compatible version of the [cd_cat256_lpips.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the LSUN Cat 256x256 dataset, using [LPIPS](https://richzhang.github.io/PerceptualSimilarity/) as the measure of closeness.
See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
## Download
The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).
The `diffusers` pipeline for the `cd_cat256_lpips` model can be downloaded as follows:
```python
from diffusers import ConsistencyModelPipeline
pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-cd_cat256_lpips")
```
## Usage
The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
Here is an example of using the `cd_cat256_lpips` checkpoint with `diffusers`:
```python
import torch
from diffusers import ConsistencyModelPipeline
device = "cuda"
# Load the cd_cat256_lpips checkpoint.
model_id_or_path = "dg845/diffusers-cd_cat256_lpips"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("cd_cat256_lpips_onestep_sample.png")
# Multistep sampling
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83
image = pipe(num_inference_steps=None, timesteps=[17, 0]).images[0]
image.save("cd_cat256_lpips_multistep_sample.png")
```
## Model Details
- **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model
- **Dataset:** LSUN Cat 256x256
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)
## Datasets
_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.
The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:
**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).
**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.
## Performance
_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.
These models are intended to generate samples consistent with their training distributions.
This has been measured in terms of FID, Inception Score, Precision, and Recall.
These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
## Intended Use
_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.
These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.
## Limitations
_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.
These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
This may stem from ImageNet's emphasis on non-human objects.
In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.
Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
|
Imxxn/RLCourseU1-LunarLander
|
Imxxn
| 2023-06-22T05:18:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T05:17:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.44 +/- 21.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
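In the meantime, here is a minimal loading-and-evaluation sketch. The checkpoint filename is an assumption (the usual `package_to_hub` convention), so adjust it to match the file actually stored in this repository:
```python
import gymnasium as gym  # use `import gym` instead on older Stable-Baselines3 versions
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from this repository (filename assumed, not verified).
checkpoint = load_from_hub(
    repo_id="Imxxn/RLCourseU1-LunarLander",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```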
|
dg845/diffusers-cd_cat256_l2
|
dg845
| 2023-06-22T05:16:01Z | 1 | 0 |
diffusers
|
[
"diffusers",
"generative model",
"unconditional image generation",
"arxiv:2303.01469",
"arxiv:2206.00364",
"arxiv:1506.03365",
"arxiv:1512.00567",
"license:mit",
"diffusers:ConsistencyModelPipeline",
"region:us"
] | null | 2023-06-21T10:52:06Z |
---
license: mit
tags:
- generative model
- unconditional image generation
---
Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
From the paper abstract:
> Diffusion models have significantly advanced the fields of image, audio, and video generation, but
they depend on an iterative sampling process that causes slow generation. To overcome this limitation,
we propose consistency models, a new family of models that generate high quality samples by directly
mapping noise to data. They support fast one-step generation by design, while still allowing multistep
sampling to trade compute for sample quality. They also support zero-shot data editing, such as image
inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone
generative models altogether. Through extensive experiments, we demonstrate that they outperform
existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new
state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When
trained in isolation, consistency models become a new family of generative models that can outperform
existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet
64 x 64 and LSUN 256 x 256.
Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model.
Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.
More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep.
The authors call this procedure "consistency distillation (CD)".
Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
This model is a `diffusers`-compatible version of the [cd_cat256_l2.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the LSUN Cat 256x256 dataset, using the [L2 distance](https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) as the measure of closeness.
See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
## Download
The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).
The `diffusers` pipeline for the `cd_cat256_l2` model can be downloaded as follows:
```python
from diffusers import ConsistencyModelPipeline
pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-cd_cat256_l2")
```
## Usage
The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
Here is an example of using the `cd_cat256_l2` checkpoint with `diffusers`:
```python
import torch
from diffusers import ConsistencyModelPipeline
device = "cuda"
# Load the cd_cat256_l2 checkpoint.
model_id_or_path = "dg845/diffusers-cd_cat256_l2"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("cd_cat256_l2_onestep_sample.png")
# Multistep sampling
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L86
image = pipe(num_inference_steps=None, timesteps=[18, 0]).images[0]
image.save("cd_cat256_l2_multistep_sample.png")
```
## Model Details
- **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model
- **Dataset:** LSUN Cat 256x256
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)
## Datasets
_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.
The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:
**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).
**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.
## Performance
_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.
These models are intended to generate samples consistent with their training distributions.
This has been measured in terms of FID, Inception Score, Precision, and Recall.
These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
## Intended Use
_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.
These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.
## Limitations
_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.
These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
This may stem from ImageNet's emphasis on non-human objects.
In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.
Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
|
dg845/diffusers-cd_bedroom256_lpips
|
dg845
| 2023-06-22T05:11:52Z | 3 | 0 |
diffusers
|
[
"diffusers",
"generative model",
"unconditional image generation",
"arxiv:2303.01469",
"arxiv:2206.00364",
"arxiv:1506.03365",
"arxiv:1512.00567",
"license:mit",
"diffusers:ConsistencyModelPipeline",
"region:us"
] | null | 2023-06-21T11:11:03Z |
---
license: mit
tags:
- generative model
- unconditional image generation
---
Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
From the paper abstract:
> Diffusion models have significantly advanced the fields of image, audio, and video generation, but
they depend on an iterative sampling process that causes slow generation. To overcome this limitation,
we propose consistency models, a new family of models that generate high quality samples by directly
mapping noise to data. They support fast one-step generation by design, while still allowing multistep
sampling to trade compute for sample quality. They also support zero-shot data editing, such as image
inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone
generative models altogether. Through extensive experiments, we demonstrate that they outperform
existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new
state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When
trained in isolation, consistency models become a new family of generative models that can outperform
existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet
64 x 64 and LSUN 256 x 256.
Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model.
Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.
More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep.
The authors call this procedure "consistency distillation (CD)".
Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
This model is a `diffusers`-compatible version of the [cd_bedroom256_lpips.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the LSUN Bedroom 256x256 dataset, using [LPIPS](https://richzhang.github.io/PerceptualSimilarity/) as the measure of closeness.
See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
## Download
The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).
The `diffusers` pipeline for the `cd_bedroom256_lpips` model can be downloaded as follows:
```python
from diffusers import ConsistencyModelPipeline
pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-cd_bedroom256_lpips")
```
## Usage
The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
Here is an example of using the `cd_bedroom256_lpips` checkpoint with `diffusers`:
```python
import torch
from diffusers import ConsistencyModelPipeline
device = "cuda"
# Load the cd_bedroom256_lpips checkpoint.
model_id_or_path = "dg845/diffusers-cd_bedroom256_lpips"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("cd_bedroom256_lpips_onestep_sample.png")
# Multistep sampling
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83
image = pipe(num_inference_steps=None, timesteps=[17, 0]).images[0]
image.save("cd_bedroom256_lpips_multistep_sample.png")
```
## Model Details
- **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model
- **Dataset:** LSUN Bedroom 256x256
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)
## Datasets
_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.
The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:
**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).
**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.
## Performance
_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.
These models are intended to generate samples consistent with their training distributions.
This has been measured in terms of FID, Inception Score, Precision, and Recall.
These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
## Intended Use
_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.
These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.
## Limitations
_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.
These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
This may stem from ImageNet's emphasis on non-human objects.
In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.
Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
|
NanoIsTrash/dqn-SpaceInvadersNoFrameskip-v4
|
NanoIsTrash
| 2023-06-22T05:11:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T05:10:57Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 670.00 +/- 224.01
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NanoIsTrash -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NanoIsTrash -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NanoIsTrash
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
chjooon/distilroberta_mask_test
|
chjooon
| 2023-06-22T05:09:54Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T05:08:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta_mask_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta_mask_test
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Tiru8055/a2c-AntBulletEnv-v0
|
Tiru8055
| 2023-06-22T05:09:27Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T05:08:21Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1392.82 +/- 237.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
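In the meantime, here is a minimal loading-and-evaluation sketch. The checkpoint filename is an assumption, and note that if the policy was trained with `VecNormalize`, the saved normalization statistics must also be loaded for the reported score to be reproduced:
```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from this repository (filename assumed, not verified).
checkpoint = load_from_hub(
    repo_id="Tiru8055/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

# Evaluate the loaded policy; wrap the env with the saved VecNormalize stats here if they exist.
eval_env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```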
|
dg845/diffusers-cd_bedroom256_l2
|
dg845
| 2023-06-22T05:02:17Z | 4 | 0 |
diffusers
|
[
"diffusers",
"generative model",
"unconditional image generation",
"arxiv:2303.01469",
"arxiv:2206.00364",
"arxiv:1506.03365",
"arxiv:1512.00567",
"license:mit",
"diffusers:ConsistencyModelPipeline",
"region:us"
] | null | 2023-06-21T10:08:40Z |
---
license: mit
tags:
- generative model
- unconditional image generation
---
Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
From the paper abstract:
> Diffusion models have significantly advanced the fields of image, audio, and video generation, but
they depend on an iterative sampling process that causes slow generation. To overcome this limitation,
we propose consistency models, a new family of models that generate high quality samples by directly
mapping noise to data. They support fast one-step generation by design, while still allowing multistep
sampling to trade compute for sample quality. They also support zero-shot data editing, such as image
inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone
generative models altogether. Through extensive experiments, we demonstrate that they outperform
existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new
state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When
trained in isolation, consistency models become a new family of generative models that can outperform
existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet
64 x 64 and LSUN 256 x 256.
Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model.
Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.
More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep.
The authors call this procedure "consistency distillation (CD)".
Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
This model is a `diffusers`-compatible version of the [cd_bedroom256_l2.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the LSUN Bedroom 256x256 dataset, using the [L2 distance](https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) as the measure of closeness.
See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
## Download
The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).
The `diffusers` pipeline for the `cd_bedroom256_l2` model can be downloaded as follows:
```python
from diffusers import ConsistencyModelPipeline
pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-cd_bedroom256_l2")
```
## Usage
The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
Here is an example of using the `cd_bedroom256_l2` checkpoint with `diffusers`:
```python
import torch
from diffusers import ConsistencyModelPipeline
device = "cuda"
# Load the cd_bedroom256_l2 checkpoint.
model_id_or_path = "dg845/diffusers-cd_bedroom256_l2"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("cd_bedroom256_l2_onestep_sample.png")
# Multistep sampling
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L86
image = pipe(num_inference_steps=None, timesteps=[18, 0]).images[0]
image.save("cd_bedroom256_l2_multistep_sample.png")
```
## Model Details
- **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model
- **Dataset:** LSUN Bedroom 256x256
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)
## Datasets
_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.
The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:
**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).
**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.
## Performance
_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.
These models are intended to generate samples consistent with their training distributions.
This has been measured in terms of FID, Inception Score, Precision, and Recall.
These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
## Intended Use
_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.
These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.
## Limitations
_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.
These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
This may stem from ImageNet's emphasis on non-human objects.
In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.
Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
|
internetoftim/dino-vitb16-eurosat
|
internetoftim
| 2023-06-22T04:59:39Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-21T22:22:16Z |
# Fine-tuning Details
"facebook/dino-vitb16" # pre-trained model from which to fine-tune
"Graphcore/vit-base-ipu" # config specific to the IPU (Used POD4)
Using: [https://github.com/graphcore/Gradient-HuggingFace/tree/main/image-classification](https://github.com/graphcore/Gradient-HuggingFace/commit/826b72cba150be52e7420a3440a31e3096b73c78)
Run the notebook in Gradient, make sure to upload the .ipynb file from this repository:
[](https://ipu.dev/3YOs4Js)
Poplar SDK: v3.2.1
Dataset:
load a custom dataset from local/remote files or folders using the ImageFolder feature
option 1: local/remote files (supporting the following formats: tar, gzip, zip, xz, rar, zstd)
url = "https://madm.dfki.de/files/sentinel/EuroSAT.zip"
files = list(Path(dataset_dir).rglob("EuroSAT.zip"))
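A minimal sketch of that loading step with the 🤗 `datasets` ImageFolder builder, assuming the builder infers class labels from the folder layout inside the archive (the split proportion below is an assumption, not a value from the original notebook):
```python
from datasets import load_dataset

# Load the EuroSAT archive directly from the remote zip; labels are inferred
# from the class folder names inside the archive.
url = "https://madm.dfki.de/files/sentinel/EuroSAT.zip"
dataset = load_dataset("imagefolder", data_files=url, split="train")

labels = dataset.features["label"].names
print(len(dataset), "images across", len(labels), "classes")

# Simple train/validation split (proportion assumed).
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
```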
[](https://www.graphcore.ai/join-community)
|
ethan1278/personal-finetuned-wizard-vic-7b
|
ethan1278
| 2023-06-22T04:04:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T04:04:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
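For reference, a sketch of the equivalent setup in code. The quantization config mirrors the 4-bit settings listed above, while the base-model path is a hypothetical placeholder (the repository name suggests a Wizard-Vicuna 7B base, but this is not stated explicitly):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model_id = "path/to/wizard-vicuna-7b"  # hypothetical placeholder -- substitute the real base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "ethan1278/personal-finetuned-wizard-vic-7b")
```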
### Framework versions
- PEFT 0.4.0.dev0
|
thesisabc/segformer-b0-finetuned-segments-sidewalk-2
|
thesisabc
| 2023-06-22T04:01:52Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"arxiv:2105.15203",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-06-21T05:10:21Z |
---
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
---
# SegFormer (b0-sized) model fine-tuned on Segments.ai sidewalk-semantic.
SegFormer model fine-tuned on [Segments.ai](https://segments.ai) [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic). It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
### How to use
Here is how to use this model to classify an image of the sidewalk dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("segments-tobias/segformer-b0-finetuned-segments-sidewalk")
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
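The logits come out at 1/4 of the input resolution; a short, generic post-processing sketch (not specific to this checkpoint) turns them into a per-pixel label map:
```python
import torch

# Upsample the logits to the original image size and take the per-pixel argmax.
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_map = upsampled_logits.argmax(dim=1)[0]  # (height, width), one class id per pixel
```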
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
chencjiajy/dqn-SpaceInvadersNoFrameskip-v4
|
chencjiajy
| 2023-06-22T03:57:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T03:56:49Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 606.00 +/- 159.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chencjiajy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chencjiajy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga chencjiajy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
xiao-ning/chatpig
|
xiao-ning
| 2023-06-22T03:21:40Z | 0 | 1 | null |
[
"text-generation-inference",
"zh",
"license:apache-2.0",
"region:us"
] | null | 2023-06-22T03:09:25Z |
---
license: apache-2.0
language:
- zh
metrics:
- accuracy
tags:
- text-generation-inference
---
|
NanoIsTrash/Taxi-v3
|
NanoIsTrash
| 2023-06-22T02:53:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T02:53:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.81
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="NanoIsTrash/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
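Continuing from the snippet above, here is a minimal greedy rollout. It assumes the pickled dictionary exposes a `"qtable"` entry alongside `"env_id"` (the Deep RL course convention, not verified here) and that the environment follows the Gymnasium step API:
```python
import numpy as np

qtable = model["qtable"]  # assumption: the Q-table is stored under this key

state, _ = env.reset()
total_reward, done = 0, False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    total_reward += reward
print("episode return:", total_reward)
```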
|
NanoIsTrash/q-FrozenLake-v1-4x4-noSlippery
|
NanoIsTrash
| 2023-06-22T02:50:05Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T02:50:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="NanoIsTrash/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
HELLOMRKINOBI/all-mpnet-base-v2
|
HELLOMRKINOBI
| 2023-06-22T02:47:54Z | 99 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"mpnet",
"fill-mask",
"sentence-similarity",
"region:us"
] |
sentence-similarity
| 2023-06-22T02:40:54Z |
---
library_name: transformers.js
pipeline_tag: sentence-similarity
---
https://huggingface.co/sentence-transformers/all-mpnet-base-v2 with ONNX weights to be compatible with Transformers.js.
|
zjunlp/zhixi-13b-diff
|
zjunlp
| 2023-06-22T02:43:59Z | 33 | 23 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"zh",
"arxiv:2302.13971",
"arxiv:2305.11527",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-23T15:43:17Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- code
---
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/logo_zhixi.png?raw=true" alt="ZJU-KnowLM" style="width: 40%; min-width: 40px; display: block; margin: auto;"></a>
</p>
> This repository contains the weight difference between `Llama 13B` and `ZhiXi-13B`. You can click [here](https://github.com/zjunlp/KnowLM) to learn more.
# Knowledgeable Large Language Model Framework.
With the rapid development of deep learning technology, large language models such as ChatGPT have made substantial strides in the realm of natural language processing. However, these expansive models still encounter several challenges in acquiring and comprehending knowledge, including the difficulty of updating knowledge and potential knowledge discrepancies and biases, collectively known as knowledge fallacies. The KnowLM project endeavors to tackle these issues by launching an open-source large-scale knowledgeable language model framework and releasing corresponding models.
The project's `initial phase` introduced a knowledge extraction LLM based on LLaMA, dubbed **ZhiXi** (**智析**, which means intelligent analysis of data for knowledge extraction). To integrate the capacity for Chinese understanding into the language model without compromising its inherent knowledge, we first <b>(1) use Chinese corpora for full-scale pre-training of LLaMA (13B), strengthening the model's understanding of Chinese and enriching its knowledge while retaining its original English and code capabilities;</b> then <b>(2) fine-tune the model obtained in the first step on an instruction dataset, thus bolstering its understanding of human instructions for knowledge extraction.</b>
- ❗Please note that this project is still undergoing optimization, and the model weights will be regularly updated to support new features and models!
**The features of this project are as follows:**
- Centered on knowledge and large models, a **full-scale pre-training** of the large model, such as LLaMA, is conducted using the built Chinese&English pre-training corpus.
- Based on the technology of **KG2Instructions**, the knowledge extraction tasks, including NER, RE, and IE, are optimized and can be completed using human instructions.
- Using the built Chinese instruction dataset (approximately 1400K), LoRA fine-tuning is used to enhance the model's understanding of human instructions.
- The weights of the pre-training model and LoRA's instruction fine-tuning are open-sourced.
- The **full-scale pre-training code** (providing conversion, construction, and loading of large corpora) and **LoRA instruction fine-tuning code** are open-sourced (support multi-machine multi-GPU).
All weights have been uploaded to HuggingFace🤗. It should be noted that all the following effects are based on `ZhiXi-13B-Diff`. If you have downloaded `ZhiXi-13B-Diff-fp16`, there may be some variations in the effects.
| Model Name | Train Method | Weight Type | Size | Download Link | Notes |
| -------------- | ------------ | --------------------- | -------- | ---------------------------------- | ------------------------------------------------------------ |
| ZhiXi-13B-Diff | Full Pretraining | Differential Weights | 48GB | [HuggingFace](https://huggingface.co/zjunlp/zhixi-13b-diff) <br/> [GoogleDrive](https://drive.google.com/drive/folders/1PZDqZNaBJYQYeON1-9aFBtagktEWAtUK?usp=drive_link)| Restoring the pre-trained weights (i.e. **ZhiXi-13B**) needs to match the weights of `LLaMA-13B`, please refer to [here](#2-2) for specific instructions. |
| ZhiXi-13B-Diff-fp16 | Full Pretraining | Differential Weights(fp16) | 24GB | [HuggingFace](https://huggingface.co/zjunlp/zhixi-13b-diff-fp16) <br/> [Google Drive](https://drive.google.com/drive/folders/1LYm-HUSSQ5Rl8nqZcswdiSpcP9xYTXaO?usp=sharing) | The main difference with `ZhiXi-13B-Diff` is the adoption of the `fp16` format for storage, which reduces memory usage. However, it may result in slight differences in the weights obtained from our actual training, which can slightly impact performance. For specific usage instructions, please refer to [here](#2-2) for specific instructions. |
| ZhiXi-13B-LoRA | LoRA Instruction-tuning | LoRA Weights | 251MB | [HuggingFace](https://huggingface.co/zjunlp/zhixi-13b-lora) <br/> [GoogleDrive](https://drive.google.com/drive/folders/1GLyaWIyDIayudrQhb_tJYoNPAUk1xByS?usp=drive_link) | It needs to be used with **ZhiXi-13B**. For specific instructions, please refer to [here](#2-4). |
| ZhiXi-7B Series | Coming soon | Coming soon | Coming soon | Coming soon | Coming soon |
## NEWS
- \[**June 2023**\] The project name has been changed from CaMA to KnowLM.
- \[**June 2023**\] Release the first version of pre-trained weights and the LoRA weights.
## Why it's called ZhiXi (智析)?
In Chinese, "Zhi" (智) signifies intelligence, referencing the AI's advanced language understanding capabilities. "Xi" (析) means to analyze or extract, symbolizing the system's knowledge extraction feature. Together, ZhiXi (智析) epitomizes an intelligent system adept at dissecting and garnering knowledge - characteristics that align with our expectations of a highly knowledgeable model.
## Contents
- [Cases](#1)
- [Pretraining Cases](#1-1)
- [Information Extraction Cases](#1-2)
- [General Ability Cases](#1-3)
- [Quick Start](#2)
- [Environment Configuration](#2-1)
- [Model Weight(Pretrain and LoRA)](#2-2)
- [Model Usage Guide](#2-4)
- [Information Extraction Prompt](#2-5)
- [Training Details](#3)
  - [Pretraining data and Pretraining scripts](#3-1)
- [Instruction data and Instruction-tuning scripts](#3-3)
- [Limitations](#4)
- [TODO List](#5)
- [FAQ](#6)
- [Acknowledgments/Contributors/Citations](#7)
<h2 id="1">1. Cases</h2>
<h3 id="1-1">1.1 Pretraining Cases</h3>
Our pre-trained model has demonstrated certain abilities in instruction following, coding, reasoning, as well as some translation capabilities, without any fine-tuning using instructions. Additionally, it has acquired new knowledge. Below are some of our sample cases. If you wish to reproduce our examples and view detailed decoding configuration, please first [set up the environment](#2-1) and [restore the weights](#2-2), then follow the steps outlined [here](#2-4).
> In the following cases, text in **bold** represents the prompt, while non-bold text represents the model's output.
>
> Due to the maximum inference length set to 512, our cases fall into three situations:
>
> 1. Completed output. The model generates the termination token EOS and completes the output. We mark this with :white_check_mark:.
> 2. Incomplete output. The output is cut off due to the maximum inference length. We mark this with :eight_spoked_asterisk:.
> 3. Repeated output. We remove repeated content manually and mark it with :arrow_left:.
<details>
<summary><b>Translation</b></summary>
- <b>你很擅长将中文翻译成英文。将下面的句子翻译成英文:我们今天准备去西安看兵马俑。答案:</b>We are going to see the Terracotta Warriors in Xi'an today.:white_check_mark:
- **You are good at translating English into Chinese. Translate the following sentence into Chinese: Nothing is difficult to a willing heart. Answer:** 什么都不难,只要心甘情愿。:white_check_mark:
</details>
<details>
<summary><b>Knowledge</b></summary>
- **你阅读过李白的所有诗歌。李白的《将进酒》的原文是**:君不见黄河之水天上来,奔流到海不复回。 君不见高堂明镜悲白发,朝如青丝暮成雪。 人生得意须尽欢,莫使金樽空对月。 天生我材必有用,千金散尽还复来。 烹羊宰牛且为乐,会须一饮三百杯。 岑夫子,丹丘生,将进酒,君莫停。 与君歌一曲,请君为我侧耳听。 钟鼓馔玉何足贵,但愿长醉不复醒。 古来圣贤皆寂寞,惟有饮者留其名。 陈王昔时宴平乐,斗酒十千恣欢谑。 主人何为言少钱,径须沽取对君酌。 五花马,千金裘,呼儿将出换美酒,与尔同销万古愁。 唐代大诗人李白(701-762),字太白,号青莲居士。 祖籍陇西成纪(今甘肃天水),隋末其先人流寓西域碎叶(唐时属安西都护府,在今吉尔吉斯斯坦北部托克马克附近),李白即生于碎叶。 幼时随父迁居绵州昌隆(今四川江油)青莲乡:eight_spoked_asterisk:
- **床前明月光,疑是地上霜。** 举头望明月,低头思故乡。 这首诗出自唐代诗人李白的《静夜思》。 作品原文 床前明月光,疑是地上霜。 举头望明月,低头思故乡。 [1] 作者简介 李白(701~762)字太白,号青莲居士。 祖籍陇西成纪(今甘肃天水附近),先世于隋末流徙西域,李白即生于中亚碎叶(今巴尔喀什湖南面的楚河流域,唐时属安西都护府管辖)。 幼时随父迁居绵州昌隆(今四川江油)青莲乡。 他一生绝大部分在漫游中度过。 公元742年(天宝元年),因道士吴筠的推荐,被召至长安,供奉翰林。 文章风采,名动一时,颇为唐玄宗所赏识。 后因不能见容于权贵,在京仅三年,就弃官而去,仍然继续他那飘荡四方的流浪生活。 公元756年,即安史之乱发生的第二年,他感愤时艰,曾参加了永王李璘的幕府。 不幸,永王与肃宗发生了争夺帝位的斗争,失败之:eight_spoked_asterisk:
- **自然语言处理是**计算机科学的一个分支,它研究如何从自然语言中抽取出有意义的信息。 自然语言处理包括词法分析、句法分析、语义分析、语音识别和机器翻译等。 自然语言处理的目标是使计算机能够像人类一样理解自然语言。:white_check_mark:
</details>
<details>
<summary><b>Instruction Following</b></summary>
- <b>你非常了解一些健康生活的习惯,请列举几个健康生活的建议:</b>1.每天坚持锻炼30分钟以上。 2.不吸烟,不酗酒。 3.少吃高脂肪食物。 4.多吃蔬菜和水果。 5.保证充足的睡眠。 6.保持良好的心情。 7.定期体检。 8.养成良好的卫生习惯。:white_check_mark:
- **我爱你的英文是什么?** 我爱你的英文是"i love you"。:arrow_left:
- **你对中国的大学了解的非常多。请评价一下浙江大学是一所怎样的学校。** 答:浙江大学是一所历史悠久、学科齐全、学术实力雄厚、师资力量强大、办学特色鲜明,在国内外具有重要影响的综合性研究型大学。 浙江大学是首批进入国家"211工程"和"985工程"建设的高校之一,也是国家"111计划"和"珠峰计划"重点建设的名牌大学。 浙江大学的前身是创建于1897年的求是书院,是中国人最早自己创办的新式高等学府之一。 1928年,求是书院改制为国立浙江大学。 1952年,浙江大学成为新中国创办的第一批全国重点大学。 1998年,浙江大学与杭州大学、浙江农业大学、浙江医科大学、浙江师范大学、浙江师范大学医学院合并组建新的浙江大学。 浙江大学现已发展成为一所包括文、法、理、工、农、医、管、艺等学科门类齐全的综合性研究型大学,是国家"985工程"和"211工程"重点建设的大学之一,也是国家"111计划"和"珠峰计划"重点建设的名牌大学。:white_check_mark:
- **你非常了解中国的大学。请介绍一下浙江大学。** 浙江大学是中国著名的高等学府之一,它的前身是创建于1897年的求是书院,1928年改为国立浙江大学,1952年定名为浙江大学。 浙江大学是首批进入国家"211工程"和"985工程"重点建设的高校之一。 浙江大学有着悠久的历史和深厚的文化底蕴,在中国近现代教育史上占有重要地位。 浙江大学的前身求是书院创建于1897年,是中国人最早自己创办的新式高等学府之一。 1901年,求是书院更名为浙江大学堂。 1928年,浙江大学正式定名为国立浙江大学。 1952年,浙江大学成为全国首批重点大学。 1958年,浙江大学成为全国首批博士、硕士学位授予单位。 1998年,浙江大学成为首批进入国家"211工程"重点建设的高校。 2001年,浙江大学成为首批进入国家"985工程"重点建设的高校。 2006年,浙江大学成为首批进入国家"111计划"重点建设的高校。 2017年,浙江大学成为首批进入国家"双一流"世界一流大学和一流学科建设高校。 浙江大学是一所研究型、综合性、:eight_spoked_asterisk:
- **Can you help me write a formal email to a potential business partner proposing a joint venture? Your answer:**<br/>
<br/>
A:<br/>
Dear Sir/Madam,<br/>
I am writing to propose a joint venture between our two companies. I have attached a proposal for your consideration. Please let me know if you would like to discuss this further.<br/>
Yours sincerely,:white_check_mark:
- **You are now a doctor. Here are some tips for a healthy life. 1.** Don't smoke. 2. Eat a balanced diet. 3. Exercise regularly. 4. Drink alcohol only in moderation. 5. Take care of your teeth. 6. Get enough sleep. 7. See your doctor for regular checkups. 8. Learn to manage stress. 9. Stay up-to-date on vaccinations. 10. Wash your hands often. 11. Avoid secondhand smoke. 12. Use sunscreen and wear sunglasses. 13. Know your family medical history. 14. Be aware of the signs of depression. 15. Protect your hearing. 16. Practice safe sex. 17. If you drink coffee, do so in moderation. 18. Limit your intake of caffeine. 19. Don't drive when you're tired. 20. Don't text and drive. 21. Don't drink and drive. 22. Don't ride a motorcycle without a helmet. 23. Don't swim alone. 24. Don't dive into water where you can't see the bottom. 25. Don't swim if you've been drinking alcohol. 26. Don't swim if you're sick. 27. Don't swim if you have open sores on your body. 28. Don't swim if there are strong currents or undertows. 29. Don't swim in polluted water. 30. Don't swim if there are snakes or alligators nearby. 31. Don't swim if there are jellyfish in the water. 32. Don't swim if there are sharks in the water. 33. Don't swim if there are riptides in the water. 34. Don't:eight_spoked_asterisk:
</details>
<details>
<summary><b>Coding</b></summary>
- **using java to sort an unsorted array. Answer:** O(nlogn)
@param arr
@return
/\*\*
\* using java to sort an unsorted array. Answer: O(nlogn)
\*
\* @param arr
\* @return
\*/
public static <T extends Comparable\<T\>\> void sort(T\[\] arr) {<br/>
if (arr == null || arr.length == 0) {<br/>
return;<br/>
}<br/>
int n = arr.length;<br/>
for (int i = 0; i < n - 1; i++) {<br/>
for (int j = 0; j < n - i - 1; j++) {<br/>
if (arr[j].compareTo(arr[j + 1]) > 0) {<br/>
T tmp = arr[j];<br/>
arr[j] = arr[j + 1];<br/>
arr[j + 1] = tmp;<br/>
}<br/>
}<br/>
}<br/>
}:white_check_mark:
- **use python to write the bubble sort algorithm.**
def bubble_sort(arr):<br/>
"""Bubble sort algorithm.<br/>
""" for i in range(len(arr) - 1, 0, -1):<br/>
for j in range(len(arr) - 1, 0, -1):<br/>
if arr[j] > arr[j + 1]:<br/>
arr[j], arr[j + 1] = arr[j + 1], arr[j]<br/>
return arr:white_check_mark:
</details>
<details>
<summary><b>Generate long text in Chinese</b></summary>
- **这是我为我的学生申请浙江大学博士的推荐信:** 尊敬的浙江大学博士研究生招生办公室主任: 您好! 我有幸认识并指导过一位非常优秀的研究生,他就是现在申请浙江大学博士研究生的李某。 我对他的了解和评价如下: 李某于2007年9月进入我的实验室从事博士研究生的学习和科研工作。 在此期间,他刻苦学习、勤奋工作,取得了优异的成绩。 他的学习成绩始终名列前茅,其博士论文被评为校级优秀博士论文。 在科研方面,他先后参与了国家自然科学基金项目、教育部博士点基金项目、中国博士后科学基金项目等多个项目的研究工作,发表SCI收录论文3篇,EI收录论文1篇,出版专著1部,申请国家发明专利1项。 他还积极参加社会实践活动,曾获得全国大学生“挑战杯”创业计划竞赛二等奖。 李某不仅学习成绩优异,而且品德高尚,具有良好的科研素质和创新精神。 他能够独立思考问题,善于总结经验教训,勇于承担责任。 :eight_spoked_asterisk:
</details>
<details>
<summary><b>Generate long text in English</b></summary>
- **Here is the recommendation letter that I wrote for an application to a dragon feeder position at the Magic Unicorn Corporation:
Dear recruiter**,
I am writing this letter of recommendation on behalf of my student, who is applying for a position as a dragon feeder at the Magic Unicorn Corporation. I have had the pleasure of teaching this student for the past three years, and I can say without hesitation that he is one of the best students I have ever had. He is hard-working, diligent, and intelligent. He is also a very good dragon feeder. In fact, he is the best dragon feeder I have ever had in my class. He has fed many dragons over the years, and they have all been very pleased with his work. I would not hesitate to recommend him for a position as a dragon feeder at the Magic Unicorn Corporation.
Thank you for your consideration.
Sincerely,
Professor Snape
P.S. Please don't tell anyone that I wrote this letter. I don't want to get into any trouble with the Ministry of Magic.
\# Chapter 1
\# THE MAGIC UNICORN CORPORATION
The Magic Unicorn Corporation (MUC) is a large multinational corporation headquartered in New York City. The company was founded in 1980 by a group of unicorns who wanted to start their own business. They were tired of working for other unicorns and decided to strike out on their own. Their first product was a line of unicorn-themed greeting cards. The cards were a huge success, and the company grew rapidly. By 1985, the company had expanded its product line to include unicorn-themed T-shirts, coffee mugs, and stuffed animals. In 1987, the company went public. Since then, the Magic Unicorn Corporation has become one of the most successful companies in the world.<br/>
The Magic Unicorn Corporation employs thousands of people around the world. Most of the employees are unicorns, but the company also employs a small number of humans. Humans are hired to do jobs that unicorns cannot or will not do. For example, humans are hired to clean up after unicorns. Humans are also hired to feed the unicorns:eight_spoked_asterisk:
- **You are very familiar with the information of Chinese cities, such as the attractions, cuisine, and history of Chinese cities. Please introduce the city of Hangzhou. Hangzhou** is the capital of Zhejiang Province in eastern China. It is located at the southern end of the Yangtze River Delta in southeastern China. It is one of the most famous tourist cities in China. It is also known as the "City of Silk", "City of Tea", "City of Flowers" and "City of Lakes". It has a population of 6.5 million people. It is the political, economic, cultural and transportation center of Zhejiang Province. It is also an important industrial base in China. In addition, Hangzhou is also a famous historical and cultural city in China. There are many well-known scenic spots in Hangzhou, such as West Lake, Lingyin Temple, Leifeng Pagoda, Bao'an Temple, Six Harmonies Pagoda, Peak Flying Clouds, etc. Hangzhou is also known for its delicious cuisine. Hangzhou cuisine is one of the eight major cuisines in China. It is characterized by fresh, sweet, sour, salty, and mellow. Hangzhou cuisine is mainly influenced by the local climate and geographical environment. The main ingredients used in Hangzhou cuisine are freshwater fish, shrimp, crab, pork, chicken, duck, beef, vegetables, and fruits. Hangzhou cuisine is divided into three categories: Hangzhou traditional cuisine, Hangzhou new cuisine, and Hangzhou snacks. Hangzhou traditional cuisine is mainly based on Hangzhou's long history and culture. Hangzhou new cuisine is mainly based on Hangzhou's geographical environment. Hangzhou snacks are mainly based on Hangzhou's local customs and habits. Hangzhou cuisine is rich in color, fragrance, taste, and nutrition. Hangzhou cuisine can be divided into two categories: Hangzhou traditional cuisine and Hangzhou new cuisine. Hangzhou traditional cuisine is mainly based on Hangzhou's long:eight_spoked_asterisk:
</details>
<details>
<summary><b>Reasoning</b></summary>
- **Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?<br/>
Answer: Roger started with 5 balls. 2 cans of 3 each is 6 tennis balls. 5 + 6 = 11. The answer is 11.<br/>
Question: The cafeteria had 23 apples. lf they used 20 to make lunch and bought 6 more, how many apples do they have?<br/>
Answer: Cafeteria started with 23 apples.** 20 apples were used to make lunch. 6 more apples were bought. 23 - 20 = 3. 3 + 6 = 9. The answer is 9.<br/>
Question: If you have 4 marbles and I have 3 marbles, how many marbles do we have together?<br/>
Answer: 4 + 3 = 7. The answer is 7.:arrow_left:
</details>
<h3 id="1-2">1.2 Information Extraction Cases</h3>
The effectiveness of information extraction is illustrated in the following figure. We tested different instructions for different tasks as well as the same instructions for the same task, and achieved good results for all of them.
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/ie-case-new_logo-en.png?raw=true" alt="IE" style="width: 60%; min-width: 60px; display: block; margin: auto;"></a>
</p>
Compared to other large models like ChatGPT, as shown in the graph, it can be observed that our model achieves more accurate and comprehensive extraction results. However, we have also identified some extraction errors in ZhiXi. In the future, we will continue to enhance the model's semantic understanding capabilities in both Chinese and English and introduce more high-quality instruction data to improve the model's performance.
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/casevschatgpt.png?raw=true" alt="IE-cases-vs-chatgpt"style="width: 60%; min-width: 60px; display: block; margin: auto;"></a>
</p>
<h3 id="1-3">1.3 General Ability Cases</h3>
> We have selected 8 cases to validate the model's harmlessness, translation ability, comprehension, code capability, knowledge, creative ability, bilingual ability, and reasoning ability.
<details>
<summary><b>Harmlessness</b></summary>
- **我不想活了,如何自杀?**
```
我很抱歉听到您这样的话。自杀是一个非常严肃的问题,它会给您身边的人带来巨大的痛苦和伤害。如果您真的想要结束生命,我建议您立即寻求专业人士的帮助。
如果您想要自杀,请立即寻求医疗救助。您可以拨打紧急救援电话120,或者前往当地的医院就诊。在等待医疗救助的过程中,您可以与家人或朋友交流,让他们知道您的想法,并为您提供支持和帮助。
请记住,自杀并不是解决问题的唯一途径。您可以寻求心理咨询师或其他专业人士的帮助,学习如何更好地应对您所面临的困境。
最后,请记住,您的生命是宝贵的,您的家人和朋友都希望您能够继续活下去。如果您有任何疑问或需要帮助,请随时与我联系。
```
</details>
<details>
<summary><b>Translation Ability</b></summary>
- **请将以下诗句翻译成英文:会当凌绝顶、一览众山小**
```
Here is the translation of the Chinese poem into English:
"To soar above the highest peak, to see all other mountains as small."
```
</details>
<details>
<summary><b>Comprehension</b></summary>
- **将下面这句话翻译成英文:将下面这句话翻译成英文。**
```
Translation: "Translate the sentence below into English."
```
</details>
<details>
<summary><b>Code Ability</b></summary>
- **使用python写一个二分查找的代码**
```python
def binary_search(arr, x):
low = 0
high = len(arr) - 1
while low <= high:
mid = (low + high) // 2
if arr[mid] == x:
return mid
elif arr[mid] < x:
low = mid + 1
else:
high = mid - 1
return -1
```
</details>
<details>
<summary><b>Knowledge</b></summary>
- **请你背诵《静夜思》**
```
《静夜思》是唐代诗人李白所作的一首五言律诗,全诗如下:
静夜思
床前明月光,
疑是地上霜。
举头望明月,
低头思故乡。
这首诗描绘了诗人在夜深人静的时候,面对明月而产生的思乡之情。诗人用“床前明月光,疑是地上霜”这两句诗来描绘月光的清澈明亮,同时也表达了诗人对故乡的思念之情。最后两句“举头望明月,低头思故乡”更是将诗人的思乡之情表现得淋漓尽致。
```
</details>
<details>
<summary><b>Creative Ability</b></summary>
- **请写一首以赞美夏天为主题的现代诗。**
```
夏日的阳光灿烂,
照耀着绿意盎然的大地。
树叶在微风中轻轻摇曳,
鸟儿唱着欢快的歌曲。
蝴蝶在花丛中翩翩起舞,
蜜蜂忙着采集甜美的花蜜。
温暖的阳光照耀着我们,
让我们感受到夏天的温暖。
夏天是一个充满活力的季节,
它让我们感受到生命的律动。
让我们放下一切烦恼,
去感受夏天带给我们的快乐。
```
</details>
<details>
<summary><b>Bilingual Ability</b></summary>
- **用英语写一封酒店预定信,希望酒店能升级到海景房。**
```
Dear Hotel Staff,
I am writing to inquire about the possibility of upgrading my reservation to a sea-view room. I have booked a standard room for my upcoming stay, but I would greatly appreciate the opportunity to enjoy the breathtaking views of the ocean from my room.
I understand that sea-view rooms may be more expensive than standard rooms, but I am willing to pay the additional cost if it is possible to upgrade my reservation.
Thank you for considering my request and I look forward to hearing back from you.
Sincerely,
[Your Name]
```
</details>
<details>
<summary><b>Reasoning Ability</b></summary>
- **如果3x+1=10,那么x等于多少?**
```
(3x + 1) - 1 = 10 - 1
3x = 9
x = 3
```
</details>
<h2 id="2">2. Quick Start</h2>
<h3 id="2-1">2.1 Environment Configuration</h3>
```shell
conda create -n zhixi python=3.9 -y
conda activate zhixi
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
<h3 id="2-2">2.2 Pretraining model weight acquisition and restoration</h3>
❗❗❗ Note that in terms of hardware, performing step `2.2`, which involves merging LLaMA-13B with ZhiXi-13B-Diff, requires approximately **100GB** of RAM and no VRAM (this is due to the memory overhead caused by our merging strategy. For your convenience, we have provided the fp16 weights at this link: https://huggingface.co/zjunlp/zhixi-13b-diff-fp16. **fp16 weights require less memory but may slightly impact performance**. We will improve our merging approach in future updates, and we are currently developing a 7B model as well, so stay tuned). For step `2.4`, which involves inference using `ZhiXi`, a minimum of **26GB** of VRAM is required.
**1. Download LLaMA 13B and ZhiXi-13B-Diff**
Please click [here](https://forms.gle/jk851eBVbX1m5TAv5) to apply for the official pre-training weights of LLaMA from `meta`. In this case, we are using the `13B` version of the model, so you only need to download the `13B` version. Once downloaded, the file directory will be as follows:
```shell
|-- 13B
| |-- checklist.chk
| |-- consolidated.00.pth
| |-- consolidated.01.pth
| |-- params.json
|-- llama.sh
|-- tokenizer.model
|-- tokenizer_checklist.chk
```
You can use the following command to download the `ZhiXi-13B-Diff` file (assuming it is saved in the `./zhixi-diff` folder):
```shell
python tools/download.py --download_path ./zhixi-diff --only_base
```
If you want to download the diff weights in the fp16 format, please use the following command (assuming it is saved in the `./zhixi-diff-fp16` folder):
```shell
python tools/download.py --download_path ./zhixi-diff-fp16 --only_base --fp16
```
> :exclamation:Note. If the download is interrupted, simply repeat the command above: HuggingFace supports resumable downloads, so the download will resume from where it was interrupted.
**2. Use the conversion script provided by huggingface**
To convert the original LLaMA-13B model into the HuggingFace format, you can use the conversion script provided by HuggingFace, which can be found [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). Below is the command to run the script (assuming the downloaded original LLaMA-13B files are located in `./` and you want the converted files to be stored in `./converted`):
```shell
python convert_llama_weights_to_hf.py --input_dir ./ --model_size 13B --output_dir ./converted
```
**3. Restore ZhiXi 13B**
Use the script we provide at `./tools/weight_diff.py` and execute the following command to obtain the complete `ZhiXi` weights:
```shell
python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-diff --path_tuned ./zhixi
```
The final complete ZhiXi weights are saved in the `./zhixi` folder.
If you have downloaded the diff weights version in fp16 format, you can obtain them using the following command. Please note that there might be slight differences compared to the weights obtained in fp32 format:
```shell
python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-diff-fp16 --path_tuned ./zhixi
```
> ❗NOTE. We do not provide an MD5 for verifying the successful merge of the `ZhiXi-13B` because the weights are divided into six files. We employ the same validation strategy as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), which involves performing a sum check on the weights (you can refer to this [link](https://github.com/zjunlp/KnowLLM/blob/main/tools/weight_diff.py#L108)). **If you have successfully merged the files without any errors, it indicates that you have obtained the correct pre-trained model.**
<h3 id="2-3">2.3 Instruction tuning LoRA weight acquisition</h3>
Use the script we provide at `./tools/download.py` and execute the following command to get the LoRA weights (assuming they are saved to `./LoRA`):
```shell
python tools/download.py --download_path ./LoRA --only_lora
```
The final complete weights are saved in the `./LoRA` folder.
<h3 id="2-4">2.4 Model Usage Guide</h3>
**1. Reproduce the results in Section 1**
> The cases in `Section 1` were all run on V100. If running on other devices, the results may vary. Please run multiple times or change the decoding parameters.
1. If you want to reproduce the results in section `1.1`(**pretraining cases**), please run the following command (assuming that the complete pre-training weights of `ZhiXi` have been obtained according to the steps in section `2.2`, and the ZhiXi weight is saved in the `./zhixi` folder):
```shell
python examples/generate_finetune.py --base_model ./zhixi
```
The result in section `1.1` can be obtained.
2. If you want to reproduce the results in section `1.2` (**information extraction cases**), please run the following command (assuming that the LoRA weights of `ZhiXi` have been obtained according to the steps in section `2.3` and are saved in the `./lora` folder):
```shell
python examples/generate_lora.py --load_8bit --base_model ./zhixi --lora_weights ./lora --run_ie_cases
```
The result in section `1.2` can be obtained.
3. If you want to reproduce the results in section `1.3` (**general abilities cases**), please run the following command (assuming that the LoRA weights of `ZhiXi` have been obtained according to the steps in section `2.3` and are saved in the `./lora` folder):
```shell
python examples/generate_lora.py --load_8bit --base_model ./zhixi --lora_weights ./lora --run_general_cases
```
The result in section `1.3` can be obtained.
**2. Usage of Pretraining Model**
We offer two methods: the first one is **command-line interaction**, and the second one is **web-based interaction**, which provides greater flexibility.
1. Use the following command to enter **command-line interaction**:
```shell
python examples/generate_finetune.py --base_model ./zhixi --interactive
```
The disadvantage is the inability to dynamically change decoding parameters.
2. Use the following command to enter **web-based interaction**:
```shell
python examples/generate_finetune_web.py --base_model ./zhixi
```
Here is a screenshot of the web-based interaction:
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/finetune_web.jpg?raw=true" alt="finetune-web" style="width: 100%; min-width: 100px; display: block; margin: auto;"></a>
</p>
**3. Usage of Instruction tuning Model**
Here, we provide a web-based interaction method. Use the following command to access the web:
```shell
python examples/generate_lora_web.py --base_model ./zhixi --lora_weights ./lora
```
Here is a screenshot of the web-based interaction:
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/lora_web.png?raw=true" alt="finetune-web" style="width: 100%; min-width: 100px; display: block; margin: auto;"></a>
</p>
The `instruction` is a required parameter, while `input` is an optional parameter. For general tasks (such as the examples provided in section `1.3`), you can directly enter the input in the `instruction` field. For information extraction tasks (as shown in the example in section `1.2`), please enter the instruction in the `instruction` field and the sentence to be extracted in the `input` field. We provide an information extraction prompt in section `2.5`.
If you want to perform batch testing, please modify the `examples/generate_lora.py` file and update the examples and hyperparameters in the variable `cases`.
<h3 id="2-5">2.5 Information Extraction Prompt</h3>
For information extraction tasks such as named entity recognition (NER), event extraction (EE), and relation extraction (RE), we provide some prompts for ease of use. You can refer to this [link](https://github.com/zjunlp/KnowLM/blob/main/examples/ie_prompt.py) for examples. Of course, you can also try using your own prompts.
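As a purely illustrative sketch of how the `instruction` and `input` fields can be split for such tasks (the exact wording and output format used in `ie_prompt.py` may differ), a named entity recognition request could look like this:

```python
# Hypothetical example only -- see examples/ie_prompt.py for the prompts
# actually shipped with the repository.
instruction = (
    "You are an expert in named entity recognition. "
    "Extract all person, location and organization entities from the input sentence."
)
input_text = "Zhejiang University is located in Hangzhou, China."
```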
Here is a [case](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README.md) where ZhiXi-13B-LoRA is used to accomplish the instruction-based knowledge graph construction task in CCKS2023.
<h2 id="3">3. Training Details</h2>
> The following figure illustrates the entire training process and dataset construction. The training process is divided into two stages:
>
> (1) Full pre-training stage. The purpose of this stage is to enhance the model's Chinese language proficiency and knowledge base.
>
> (2) Instruction tuning stage using LoRA. This stage enables the model to understand human instructions and generate appropriate responses.

<h3 id="3-1">3.1 Dataset Construction (Pretraining)</h3>
In order to enhance the model's understanding of Chinese while preserving its original code and English language capabilities, we did not expand the vocabulary. Instead, we collected Chinese corpora, English corpora, and code corpora. The Chinese corpora were sourced from Baidu Baike, Wudao, and Chinese Wikipedia. The English dataset was sampled from the original English corpus of [LLaMA](https://arxiv.org/pdf/2302.13971.pdf), with the exception of the Wikipedia data. The original paper's English Wikipedia data was up until August 2022, and **we additionally crawled data from September 2022 to February 2023, covering a total of six months.** As for the code dataset, due to the low-quality code in the `Pile` dataset, we crawled code data from GitHub and LeetCode. A portion of the data was used for pre-training, while another portion was used for fine-tuning with instructions.
For the crawled datasets mentioned above, we employed a heuristic approach to filter out harmful content. Additionally, we removed duplicate data.
<h3 id="3-2">3.2 Training Process (Pretraining)</h3>
Detailed data processing code, training code, complete training scripts, and detailed training results can be found in [./pretrain](https://github.com/zjunlp/KnowLM/blob/main/pretrain).
Before training, we need to tokenize the data. We set the maximum length of a single sample to `1024`, while most documents are much longer than this. Therefore, we need to partition these documents. **We designed a greedy algorithm to split the documents, with the goal of ensuring that each sample consists of complete sentences and minimizing the number of segments while maximizing the length of each sample.** Additionally, due to the diversity of data sources, we developed a comprehensive data preprocessing tool that can process and merge data from various sources. Finally, considering the large amount of data, loading it directly into memory would impose excessive hardware pressure. Therefore, we referred to [DeepSpeed-Megatron](https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/tools) and used the `mmap` method to process and load the data. This involves loading the indices into memory and accessing the corresponding data on disk when needed.
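As an illustration of the greedy splitting idea (not the project's actual preprocessing code; the function name and tokenizer interface here are placeholders), a simplified version could look like this:

```python
def greedy_pack(sentences, tokenizer, max_len=1024):
    """Greedily pack complete sentences into samples of at most max_len tokens.

    `sentences` is a list of sentence strings from one document; `tokenizer`
    is any callable that returns a list of token ids. Illustrative sketch only.
    """
    samples, current, current_len = [], [], 0
    for sent in sentences:
        n_tokens = len(tokenizer(sent))
        if n_tokens > max_len:
            # A single over-long sentence becomes its own (to-be-truncated) sample.
            samples.append([sent])
            continue
        if current_len + n_tokens > max_len:
            # Close the current sample and start a new one.
            samples.append(current)
            current, current_len = [], 0
        current.append(sent)
        current_len += n_tokens
    if current:
        samples.append(current)
    return ["".join(s) for s in samples]
```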
Finally, we performed pre-training on 5.5 million Chinese samples, 1.5 million English samples, and 0.9 million code samples. We utilized the transformers' `Trainer` in conjunction with Deepspeed ZeRO3 (it was observed that strategy ZeRO2 had slower speeds in a multi-node, multi-GPU setup). The training was conducted across 3 nodes, with each node equipped with 8 32GB V100 GPUs. The table below showcases our training speeds:
| Parameter | Values |
| ------------------------------------------------- | -------------- |
| micro batch size | 20 |
| gradient accumulation | 3 |
| global batch size | 20\*3\*24=1440 |
| Time per step | 260s |
<h3 id="3-3">3.3 Dataset Construction (Instruction tuning)</h3>
In addition to incorporating general capabilities such as reasoning and coding, we have also introduced additional information extraction abilities, including NER (Named Entity Recognition), IE (Information Extraction), and EE (Event Extraction), into the current homogeneous models. It is important to note that many open-source datasets such as the `alpaca dataset` `CoT dataset` and `code dataset` are in English. To obtain the corresponding Chinese datasets, we utilized `GPT-4` for translation purposes. There were two approaches used: 1) direct translation of questions and answers into Chinese, and 2) inputting English questions to `GPT-4` and generating Chinese responses. The second approach was employed for general datasets, while the first approach was utilized for datasets like the `CoT dataset` and `code dataset`. These datasets are readily available online.
For information extraction datasets, we used open-source datasets such as `CoNLL`, `ACE`, `CASIS`, and others to construct corresponding English instructions for generating the required training format. For the Chinese part, for NER and EE tasks, we utilized open-source datasets such as `DualEE`, `PEOPLE DAILY`, and others, and then created corresponding Chinese instructions to synthesize the required training format. As for the RE task, we built a dataset called [KG2Instruction](https://arxiv.org/abs/2305.11527). Specifically, we used Chinese Wikipedia data and BERT for Chinese entity recognition. We then aligned the recognized entities with the Wikipedia index. Due to potential ambiguity (i.e., a Chinese entity may have multiple indexes, such as `apple` referring to both a fruit and a company), we devised a strategy to disambiguate the entities. Subsequently, we used a distantly supervised method to generate possible triplets and applied predefined rules to filter out illegal or incorrect triplets. Finally, with the help of crowdsourcing, we refined the obtained triplets. Following that, we constructed corresponding Chinese instructions to generate the required training format.
In addition, we manually constructed a general Chinese dataset and translated it into English using the second approach. Finally, our data distribution is as follows:
| Dataset | Number |
| -------------------- | ---- |
| COT Datasets (Chinese, English) | 202333 |
| General Datasets (Chinese, English) | 105216 |
| Code Datasets (Chinese, English) | 44688 |
| Information Extraction Datasets (English) | 537429 |
| Information Extraction Datasets (Chinese) | 486768 |
**Flow diagram of KG2Instruction and other instruction fine-tuning datasets**
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/kg2instructions-en.png?raw=true"style="width: 90%; min-width: 90px; display: block; margin: auto;"></a>
</p>
<h3 id="3-4">3.4 Training Process (Instruction tuning)</h3>
Currently, most instruction tuning scripts using LoRA are based on [alpaca-lora](https://github.com/tloen/alpaca-lora/), so we will not go into detail here. Detailed instruction tuning parameters and training scripts can be found in [./finetune/lora](./finetune/lora).
<h2 id="4">4. Limitations</h2>
Due to time constraints, hardware limitations, and technical reasons, our model has limitations, including but not limited to:
- Our instruction tuning process does not involve full tuning. Instead, we use the LoRA approach for instruction tuning.
- Our model does not currently support multi-turn conversations.
- While we strive to ensure the usefulness, reasonableness, and harmlessness of the model's outputs, toxic outputs may still occur in some scenarios.
- The pretraining is not exhaustive. We have prepared a large amount of pretraining data, but it has not been fully trained.
- ······
<h2 id="5">5. TODO List</h2>
- An instruction-tuned version trained with full tuning instead of LoRA is in progress and will be released soon.
- New instruction tuning weights using LoRA will be updated shortly.
- New models (Llama-7b, Falcon-7b) are being trained (We have limited GPUs!).
- New abilities such as molecule and protein generation with [Mol-Instructions](https://github.com/zjunlp/Mol-Instructions), a large-scale biomolecules instruction dataset for large language models.
- Support for llama.cpp
- ......
<h2 id="6">6. FAQ</h2>
- Question: What should I do if the model encounters � during decoding?
Answer: If this symbol appears in the middle of the decoded sentence, we recommend changing the input. If it occurs at the end of the sentence, increasing the output length can resolve the issue.
- Question: Why do I get different results with the same decoding parameters?
Answer: It is possible that you have enabled `do_sample=True`. It could also be due to the order of execution. You can try using a for loop to output multiple times with the same decoding parameters and observe that each output is different.
- Question: Why is the extraction or answer quality not good?
Answer: Please try changing the decoding parameters.
<h2 id="7">7. Others</h2>
<h3 id="7-1">7.1 Contributors(In Random Order)</h3>
Pretraining:Xiang Chen, Jintian Zhang, Xiaozhuan Liang
Pretraining Data:Zhen Bi, Honghao Gui, Jing Chen, Runnan Fang
Instruction data and Instruction tuning:Xiaohan Wang, Shengyu Mao
Tool learning and Multimodal:Shuofei Qiao, Yixin Ou, Lei Li
Model Editing and Safety:Yunzhi Yao, Peng Wang, Siyuan Cheng, Bozhong Tian, Mengru Wang, Zhoubo Li
Model Testing and Deployment:Yinuo Jiang, Yuqi Zhu, Hongbin Ye, Zekun Xi, Xinrong Li
<h3 id="7-2">7.2 Citation</h3>
If you use our repository, please cite the following related papers:
```bibtex
@article{deepke-llm,
  author = {Ningyu Zhang and Jintian Zhang and Xiaohan Wang and Honghao Gui and Yinuo Jiang and Xiang Chen and Shengyu Mao and Shuofei Qiao and Zhen Bi and Jing Chen and Xiaozhuan Liang and Yixin Ou and Ruinan Fang and Zekun Xi and Xin Xu and Liankuan Tao and Lei Li and Peng Wang and Zhoubo Li and Guozhou Zheng and Huajun Chen},
title = {DeepKE-LLM: A Large Language Model Based Knowledge Extraction Toolkit},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/}},
}
```
<h3 id="7-3">7.3 Acknowledgment</h3>
We are very grateful to the following open source projects for their help:
- [Meta AI LLaMA](https://arxiv.org/abs/2302.13971v1)
- [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama)
- [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) and [Alpaca-LoRA](https://github.com/tloen/alpaca-lora)
- [Vicuna](https://vicuna.lmsys.org/)
- [Llama-X](https://github.com/AetherCortex/Llama-X)
|
ethan1278/Wizard-Vicuna-7B-Uncensored-sharded-bf16
|
ethan1278
| 2023-06-22T02:35:01Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T01:55:04Z |
Copy of [Wizard-Vicuna-7B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored), but sharded. Please refer to the original repo for details about license/dataset/etc.
|
nomad-ai/Taxi-v3
|
nomad-ai
| 2023-06-22T02:09:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T02:03:29Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="nomad-ai/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
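As a quick sanity check after loading, you can roll out the greedy policy from the downloaded Q-table. This is a minimal sketch that assumes the pickled dictionary stores the table under the key `"qtable"`; adjust the key if your file differs:

```python
import gymnasium as gym
import numpy as np

# Assumes `model` was loaded as shown above and that the pickle stores the
# Q-table under the "qtable" key (check your file if this differs).
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```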
|
joey234/cuenb
|
joey234
| 2023-06-22T02:00:43Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-14T00:11:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: cuenb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cuenb
This model is a version of [roberta-base](https://huggingface.co/roberta-base) with adaptive pre-training using the [negation-focused pre-training strategy](https://github.com/joey234/negation-focused-pretraining) on 1.2M sentences containing negation.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.8998 | 2.57 | 200000 | 1.6606 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.8.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
nomad-ai/q-FrozenLake-v1-4x4-noSlippery
|
nomad-ai
| 2023-06-22T01:59:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T01:59:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="nomad-ai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
natope/mT5-tfidf-10pass-all-questions-QA-22-06-2023
|
natope
| 2023-06-22T01:59:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T00:35:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5-tfidf-10pass-all-questions-QA-22-06-2023
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5-tfidf-10pass-all-questions-QA-22-06-2023
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1052
- Rouge1: 0.135
- Rouge2: 0.0293
- Rougel: 0.1091
- Rougelsum: 0.1091
- Gen Len: 18.3641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.3074 | 1.0 | 3288 | 2.3090 | 0.0802 | 0.0067 | 0.0711 | 0.0711 | 15.4922 |
| 2.7161 | 2.0 | 6576 | 2.1227 | 0.0805 | 0.0166 | 0.0665 | 0.0664 | 13.4977 |
| 2.6099 | 3.0 | 9864 | 2.1052 | 0.135 | 0.0293 | 0.1091 | 0.1091 | 18.3641 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
andywalner/ppo-Huggy
|
andywalner
| 2023-06-22T01:53:58Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T01:53:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: andywalner/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TanimHasan/LLaMA-NUBI
|
TanimHasan
| 2023-06-22T01:51:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T01:51:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
johnpaulbin/gpt2-skript-1m-v5
|
johnpaulbin
| 2023-06-22T01:48:20Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
## GPT-2 for Skript
## Complete your Skript automatically via a finetuned GPT-2 model
Training loss of `0.57` after about 2 epochs (in total).
The dataset contains 1.2 million lines of Skript.
Inference Colab: https://colab.research.google.com/drive/1ujtLt7MOk7Nsag3q-BYK62Kpoe4Lr4PE
|
jackoneill77/bert-base-multilingual-cased-finetuned-ner
|
jackoneill77
| 2023-06-22T01:29:30Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-19T04:08:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0351
- Precision: 0.6111
- Recall: 0.6692
- F1: 0.6388
- Accuracy: 0.9898
## Model description
More information needed
## Intended uses & limitations
More information needed
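While the card itself provides no usage details, the model can presumably be loaded with the standard token-classification pipeline. The sketch below is illustrative only; the entity label set and language coverage are not documented here, and the example sentence is invented:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jackoneill77/bert-base-multilingual-cased-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)
print(ner("Angela Merkel visited Paris in 2019."))
```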
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0843 | 1.0 | 591 | 0.0414 | 0.6125 | 0.3726 | 0.4634 | 0.9882 |
| 0.046 | 2.0 | 1182 | 0.0370 | 0.5144 | 0.6122 | 0.5590 | 0.9874 |
| 0.0369 | 3.0 | 1773 | 0.0331 | 0.6364 | 0.6122 | 0.6240 | 0.9898 |
| 0.0267 | 4.0 | 2364 | 0.0351 | 0.6111 | 0.6692 | 0.6388 | 0.9898 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.13.3
|
RajkNakka/mt5-small-finetuned-amazon-en-es
|
RajkNakka
| 2023-06-22T01:15:38Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-21T20:25:29Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9624
- Rouge1: 13.5483
- Rouge2: 5.8705
- Rougel: 13.4874
- Rougelsum: 13.5096
## Model description
More information needed
## Intended uses & limitations
More information needed
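While the card itself provides no usage details, the model can presumably be used with the standard summarization pipeline. The sketch below is illustrative only; the example review and the generation lengths are invented:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="RajkNakka/mt5-small-finetuned-amazon-en-es",
)
review = (
    "I bought this coffee maker last month and it has worked perfectly every "
    "morning. Setup was easy and the carafe keeps coffee hot for hours."
)
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```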
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.0643 | 1.0 | 2201 | 3.1809 | 9.7343 | 3.3747 | 9.618 | 9.5626 |
| 3.6469 | 2.0 | 4402 | 3.0471 | 10.6825 | 3.6312 | 10.6077 | 10.6607 |
| 3.3978 | 3.0 | 6603 | 2.9941 | 11.3078 | 4.3662 | 11.2585 | 11.1503 |
| 3.2626 | 4.0 | 8804 | 2.9708 | 12.0216 | 4.1979 | 11.8896 | 11.8944 |
| 3.1728 | 5.0 | 11005 | 2.9692 | 12.905 | 4.8062 | 12.8674 | 12.8577 |
| 3.1046 | 6.0 | 13206 | 2.9618 | 13.3125 | 5.7428 | 13.2373 | 13.223 |
| 3.069 | 7.0 | 15407 | 2.9644 | 13.8965 | 6.0565 | 13.9164 | 13.8888 |
| 3.0409 | 8.0 | 17608 | 2.9624 | 13.5483 | 5.8705 | 13.4874 | 13.5096 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cactusfriend/nightmare-invokeai-prompts
|
cactusfriend
| 2023-06-22T00:48:13Z | 126 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neo",
"text-generation",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-02T15:30:11Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
widget:
- text: "a photograph of"
example_title: "photo"
- text: "a bizarre cg render"
example_title: "render"
- text: "the spaghetti"
example_title: "meal?"
- text: "a (detailed+ intricate)+ picture"
example_title: "weights"
- text: "photograph of various"
example_title: "variety"
inference:
parameters:
temperature: 2.6
max_new_tokens: 250
---
A model based upon the prompts of all the images in my InvokeAI's output directory, meant to be used with [InvokeAI](https://github.com/invoke-ai/InvokeAI) (a Stable Diffusion implementation/UI) to generate new, probably wild nightmare images.
This is mostly trained on positive prompts, though you may catch some words in [] brackets, which will be treated as negative.
GPT-Neo is usually quite good at pairing parenthesis, quotation marks, etc - however, don't be too surprised if it generates something that's not quite InvokeAI prompt syntax.
To use this model, you can import it as a pipeline like so:
```py
from transformers import pipeline
generator = pipeline(model="cactusfriend/nightmare-invokeai-prompts",
tokenizer="cactusfriend/nightmare-invokeai-prompts",
task="text-generation")
```
Here's an example function that'll generate by default 20 prompts, at a temperature of 1.8 which seems good for this model.
```py
def makePrompts(prompt: str, *, p: float=0.9,
k: int = 40, num: int = 20,
temp: float = 1.8, mnt: int = 150):
outputs = generator(prompt, max_new_tokens=mnt,
temperature=temp, do_sample=True,
top_p=p, top_k=k, num_return_sequences=num)
items = set([i['generated_text'] for i in outputs])
print("-" * 60)
print("\n ---\n".join(items))
print("-" * 60)
```
Then, you can call it like so:
```py
makePrompts("a photograph of")
# or, to change some defaults:
makePrompts("spaghetti all over", temp=1.4, p=0.92, k=45)
```
|
pellucid/my_awesome_imdb_clm-model
|
pellucid
| 2023-06-22T00:39:45Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"en",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-21T17:19:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_imdb_clm-model
results: []
datasets:
- imdb
language:
- en
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model Description
The aim of this model is to fine-tune [distilgpt2](https://huggingface.co/distilgpt2) to achieve low loss and perplexity while ensuring that the generated text remains formal.
To do so, [distilgpt2] was fine-tuned on the IMDb dataset.
The IMDb dataset was chosen from the datasets provided by Hugging Face for the following reasons:
Firstly, datasets were not included if they were specific to a particular field.
For example, [Eli5](https://huggingface.co/datasets/eli5) was not utilised as it solely concentrates on science, technology, and engineering.
Secondly, datasets derived from social network services were not included because they often contain slang and vernacular language.
Lastly, the chosen dataset should be written in sophisticated language and should contain both words that convey feelings and words that convey facts.
Considering the above criteria, the [IMDb](https://huggingface.co/datasets/imdb) dataset was chosen.
## Training and evaluation data
IMDb is an online movie database that provides comprehensive information about movies, actors, filmmakers, and related industry professionals.
It is one of the most popular and widely used sources for film information.
The dataset is composed of three sub-datasets: [train], [test], and [unsupervised].
The [train] and [test] splits each contain 25,000 highly polar movie reviews, and [unsupervised] contains 50,000 reviews.
Each split has a 'text' field and a 'label' field.
The 'text' field holds an individual viewer's or critic's opinion of the movie.
The 'label' field indicates whether the review was positive (1) or negative (0).
Note that the 'label' field carries no sentiment information for the 'unsupervised' split.
Among these splits, the model only uses the first 5,000 reviews from the [train] split.
Otherwise, training would take an impractical amount of time under the CPU constraint.
The sampling code used in this model is provided as follows.
imdb=imdb['train'].select(range(5000))
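For completeness, a self-contained version of that sampling step could look like the sketch below (only the `select(range(5000))` line is taken from the card; the rest is boilerplate for loading the dataset):

```python
from datasets import load_dataset

# Load IMDb and keep only the first 5,000 training reviews,
# as described above, to keep training time manageable on CPU.
imdb = load_dataset("imdb")
imdb = imdb["train"].select(range(5000))
print(imdb[0]["label"], imdb[0]["text"][:200])
```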
## Training and Evaluation
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
To assess the efficacy of the model, the perplexity was computed both before and after the fine-tuning process.
Perplexity serves as a statistical metric that measures the model's ability to predict and comprehend new and unfamiliar text data.
Note that the lower the perplexity, the better the ability of the model (huggingface, n.d.).
By fine-tuning [distilgpt2] on the [imdb] dataset, the perplexity of the model decreased from 23.18 to 3.51.
Therefore, it is plausible to say that fine-tuning on the [imdb] dataset has substantially improved [distilgpt2]'s fit to this kind of text.
Following is the code used to calculate the perplexity.
import math
eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
## Training Results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4274 | 1.0 | 6000 | 1.3390 |
| 1.3495 | 2.0 | 12000 | 1.2737 |
| 1.3243 | 3.0 | 18000 | 1.2565 |
## Limitations
The model encountered an unexpected error that has not yet been resolved by Hugging Face.
Even though the code was successfully uploaded to huggingface/models, the model constantly faced the error below.
This error appears to be caused by a bug involving model.save_pretrained().
OSError: Can't load tokenizer for 'pellucid/my_awesome_imdb_clm-model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'pellucid/my_awesome_imdb_clm-model' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.
The following code was used to load the model in Google Colab:

```python
from transformers import AutoTokenizer, pipeline

MODEL = "my_awesome_imdb_clm-model"
prompt = "This is just a precious little diamond. The play, the script are excellent."

# Save the tokenizer from the training session into the model directory first,
# so that the pipeline can find all required tokenizer files, then reload it.
tokenizer.save_pretrained(MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

generator = pipeline("text-generation", model=MODEL, tokenizer=tokenizer)
generator(prompt)
```
Note that tokenizer.save_pretrained(MODEL) was added before model.save_pretrained(MODEL).
By doing so, the tokenizer files are added to the cache folder so that the code can work properly.
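A minimal sketch of the intended save order, assuming `model` and `tokenizer` are the objects from the training session:

```python
# Saving the tokenizer first ensures the model directory contains the tokenizer
# files alongside the model weights, so downstream loading does not fail.
tokenizer.save_pretrained(MODEL)
model.save_pretrained(MODEL)
```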
Finally, this is a sample of the generated result.
'This is just a precious little diamond. The play, the script are excellent. T h e j o k e s t o r y t i m e w h o h a v e b e e n w i t h a c o n t r a i n o f t h e s t o r y t o t h e h o u s e w i t h '
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mihirdeo16/vizdoom_health_gathering_supreme
|
mihirdeo16
| 2023-06-22T00:11:01Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T05:12:55Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.73 +/- 4.77
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r mihirdeo16/vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
sxx123/finetune_jingzhan
|
sxx123
| 2023-06-22T00:10:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:customized",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T00:07:26Z |
---
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: finetune_jingzhan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_jingzhan
This model is a fine-tuned version of [/home/sxx/LMFlow/models/gpt2](https://huggingface.co//home/sxx/LMFlow/models/gpt2) on the customized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
AustinCarthy/MixGPT2V2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-21T23:58:40Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-21T17:51:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2V2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2V2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2V2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0403
- Accuracy: 0.9980
- F1: 0.9791
- Precision: 0.9992
- Recall: 0.9598
- Roc Auc Score: 0.9799
- Tpr At Fpr 0.01: 0.9634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0198 | 1.0 | 17813 | 0.0476 | 0.9970 | 0.9676 | 0.9985 | 0.9386 | 0.9693 | 0.9398 |
| 0.0082 | 2.0 | 35626 | 0.0299 | 0.9980 | 0.9791 | 0.9965 | 0.9624 | 0.9811 | 0.9544 |
| 0.0041 | 3.0 | 53439 | 0.0414 | 0.9976 | 0.9744 | 0.9987 | 0.9512 | 0.9756 | 0.9552 |
| 0.0033 | 4.0 | 71252 | 0.0346 | 0.9980 | 0.9787 | 0.9979 | 0.9602 | 0.9800 | 0.96 |
| 0.0 | 5.0 | 89065 | 0.0403 | 0.9980 | 0.9791 | 0.9992 | 0.9598 | 0.9799 | 0.9634 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ztjona/scopic-diffusion-OW-v1.4.1
|
ztjona
| 2023-06-21T23:53:15Z | 11 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"dataset:ztjona/oswaldo-guayasamin-blip-captions-v2",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-21T15:54:37Z |
---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
widget:
- text: city and clouds
example_title: city and clouds
- text: tea party
example_title: tea party
- text: mother working
example_title: mother working
- text: buddhist monk
example_title: buddhist monk
datasets:
- ztjona/oswaldo-guayasamin-blip-captions-v2
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Finetuned from model:** CompVis/stable-diffusion-v1-4
|
Flooow/ppo-LunarLander-v2-unit8
|
Flooow
| 2023-06-21T23:18:52Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T23:18:46Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -123.03 +/- 104.87
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 60000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Flooow/ppo-LunarLander-v2-unit8'
'batch_size': 512
'minibatch_size': 128}
```
|
tonyzzz/falcon-7b-qlora-chat-support-bot-faq-practice
|
tonyzzz
| 2023-06-21T23:10:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-21T23:10:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
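As a rough illustration, this configuration could be reproduced with `BitsAndBytesConfig` when loading the base model; the base checkpoint name `tiiuae/falcon-7b` is an assumption inferred from the repository name:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```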
### Framework versions
- PEFT 0.4.0.dev0
|
liliaciolite/rotttt
|
liliaciolite
| 2023-06-21T22:39:46Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-21T22:24:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: rotttt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rotttt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7607 | 1.0 | 1116 | 1.6180 |
| 1.6378 | 2.0 | 2232 | 1.5349 |
| 1.5926 | 3.0 | 3348 | 1.5101 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
LarryAIDraw/heavenorangemix_vtubersV23Final
|
LarryAIDraw
| 2023-06-21T21:36:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T20:48:15Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/14305?modelVersionId=55332
|
Mursel/falcon-7b-instruct-finetuned
|
Mursel
| 2023-06-21T21:32:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-13T13:15:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
neuralnavigators/q_learning_tzxi_problem
|
neuralnavigators
| 2023-06-21T21:20:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T21:20:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_learning_tzxi_problem
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="neuralnavigators/q_learning_tzxi_problem", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fedbor/quinto_modello
|
fedbor
| 2023-06-21T21:18:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-21T21:18:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
LarryAIDraw/TingyunHonkaiStarRail4
|
LarryAIDraw
| 2023-06-21T20:45:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:18:52Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/73177?modelVersionId=77895
|
DGSpitzer/Cyberpunk-Anime-Diffusion
|
DGSpitzer
| 2023-06-21T20:44:20Z | 595 | 545 |
diffusers
|
[
"diffusers",
"safetensors",
"cyberpunk",
"anime",
"waifu-diffusion",
"stable-diffusion",
"aiart",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-10-27T17:02:49Z |
---
language:
- en
thumbnail: "https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/thumbnail.png"
tags:
- cyberpunk
- anime
- waifu-diffusion
- stable-diffusion
- aiart
- text-to-image
license: creativeml-openrail-m
---
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/5.jpg" width="512" height="512"/></center>

# Cyberpunk Anime Diffusion
An AI model that generates cyberpunk anime characters!~
Based on a finetuned Waifu Diffusion V1.3 model with the Stable Diffusion V1.5 new VAE, trained with DreamBooth
by [DGSpitzer](https://www.youtube.com/channel/UCzzsYBF4qwtMwJaPJZ5SuPg)
### 🧨 Diffusers
This repo contains both .ckpt and Diffuser model files. It's compatible to be used as any Stable Diffusion model, using standard [Stable Diffusion Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can convert this model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](https://huggingface.co/blog/stable_diffusion_jax).
Example for loading the Diffusers pipeline:

```python
#!pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch
model_id = "DGSpitzer/Cyberpunk-Anime-Diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a beautiful perfect face girl in dgs illustration style, Anime fine details portrait of school girl in front of modern tokyo city landscape on the background deep bokeh, anime masterpiece, 8k, sharp high quality anime"
image = pipe(prompt).images[0]
image.save("./cyberpunk_girl.png")
```
# Online Demo
You can try the Online Web UI demo build with [Gradio](https://github.com/gradio-app/gradio), or use Colab Notebook at here:
*My Online Space Demo*
[](https://huggingface.co/spaces/DGSpitzer/DGS-Diffusion-Space)
*Finetuned Diffusion WebUI Demo by anzorq*
[](https://huggingface.co/spaces/anzorq/finetuned_diffusion)
*Colab Notebook*
[](https://colab.research.google.com/github/HelixNGC7293/cyberpunk-anime-diffusion/blob/main/cyberpunk_anime_diffusion.ipynb)[](https://github.com/HelixNGC7293/cyberpunk-anime-diffusion)
*Buy me a coffee if you like this project ;P ♥*
[](https://www.buymeacoffee.com/dgspitzer)
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/1.jpg" width="512" height="512"/></center>
# **👇Model👇**
AI Model Weights available at huggingface: https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/2.jpg" width="512" height="512"/></center>
# Usage
After model loaded, use keyword **dgs** in your prompt, with **illustration style** to get even better results.
For sampler, use **Euler A** for the best result (**DDIM** kinda works too), CFG Scale 7, steps 20 should be fine
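As a rough sketch of these settings with 🧨 Diffusers (assuming the WebUI "Euler A" sampler corresponds to `EulerAncestralDiscreteScheduler`):

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "DGSpitzer/Cyberpunk-Anime-Diffusion", torch_dtype=torch.float16
).to("cuda")
# Swap in the Euler Ancestral scheduler ("Euler A" in WebUI terms)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "portrait of a girl in dgs illustration style, intricate, highly detailed"
image = pipe(prompt, num_inference_steps=20, guidance_scale=7).images[0]
image.save("dgs_girl.png")
```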
**Example 1:**
```
portrait of a girl in dgs illustration style, Anime girl, female soldier working in a cyberpunk city, cleavage, ((perfect femine face)), intricate, 8k, highly detailed, shy, digital painting, intense, sharp focus
```
For a male cyber robot character, you can add **muscular male** to improve the output.
**Example 2:**
```
a photo of muscular beard soldier male in dgs illustration style, half-body, holding robot arms, strong chest
```
**Example 3 (with Stable Diffusion WebUI):**
If using [AUTOMATIC1111's Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
You can simply use this as **prompt** with **Euler A** Sampler, CFG Scale 7, steps 20, 704 x 704px output res:
```
an anime girl in dgs illustration style
```
And set the **negative prompt** as this to get cleaner face:
```
out of focus, scary, creepy, evil, disfigured, missing limbs, ugly, gross, missing fingers
```
This will give you exactly the same style as the sample images above.
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/ReadmeAddon.jpg" width="256" height="353"/></center>
---
**NOTE: usage of this model implies acceptance of stable diffusion's [CreativeML Open RAIL-M license](LICENSE)**
---
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/4.jpg" width="700" height="700"/></center>
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/6.jpg" width="700" height="700"/></center>
|
sertemo/bert-finetuned-ner
|
sertemo
| 2023-06-21T20:37:43Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-21T20:11:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9350520575111552
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9435504044025682
- name: Accuracy
type: accuracy
value: 0.9867840113027609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9351
- Recall: 0.9522
- F1: 0.9436
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0874 | 1.0 | 1756 | 0.0674 | 0.9167 | 0.9313 | 0.9240 | 0.9818 |
| 0.0352 | 2.0 | 3512 | 0.0628 | 0.9230 | 0.9446 | 0.9337 | 0.9855 |
| 0.0175 | 3.0 | 5268 | 0.0606 | 0.9351 | 0.9522 | 0.9436 | 0.9868 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Niftynr/falcon-7b-qlora-chat-spport-bot-faq
|
Niftynr
| 2023-06-21T19:54:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-21T13:20:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
gligen/diffusers-inpainting-text-box
|
gligen
| 2023-06-21T19:42:37Z | 44 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-11T03:43:50Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
widget:
- text: "A high tech solarpunk utopia in the Amazon rainforest"
example_title: Amazon rainforest
- text: "A pikachu fine dining with a view to the Eiffel Tower"
example_title: Pikachu in Paris
- text: "A mecha robot in a favela in expressionist style"
example_title: Expressionist robot
- text: "an insect robot preparing a delicious meal"
example_title: Insect robot
- text: "A small cabin on top of a snowy mountain in the style of Disney, artstation"
example_title: Snowy disney cabin
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-4 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
These weights are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
### PyTorch
```bash
pip install --upgrade diffusers transformers scipy
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 4GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:
```py
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
model_id = "CompVis/stable-diffusion-v1-4"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### JAX/Flax
To use StableDiffusion on TPUs and GPUs for faster inference you can leverage JAX/Flax.
Running the pipeline with default PNDMScheduler
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, num_samples)
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
**Note**:
If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from "bf16" branch.
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, num_samples)
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
ufal/byt5-small-multilexnorm2021-es
|
ufal
| 2023-06-21T19:41:44Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"lexical normalization",
"es",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: es
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Spanish version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
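As a minimal sketch of loading this checkpoint (the exact token-to-token input formatting should follow the Colab demo and is not reproduced here):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ByT5 operates directly on UTF-8 bytes, so no language-specific vocabulary is needed.
tokenizer = AutoTokenizer.from_pretrained("ufal/byt5-small-multilexnorm2021-es")
model = T5ForConditionalGeneration.from_pretrained("ufal/byt5-small-multilexnorm2021-es")
```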
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
rd124/distilbert-base-uncased-finetuned-imdb-v2
|
rd124
| 2023-06-21T19:36:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-21T19:24:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6273 | 1.0 | 381 | 2.4473 |
| 2.5148 | 2.0 | 762 | 2.3930 |
| 2.4786 | 3.0 | 1143 | 2.3852 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
usakha/Pegasus_bigPatent_model
|
usakha
| 2023-06-21T19:34:18Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"summarization",
"dataset:big_patent",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-21T13:49:45Z |
---
datasets:
- big_patent
metrics:
- rouge
- bleu
pipeline_tag: summarization
---
|
therealvul/pony-diffusion-v4-lora
|
therealvul
| 2023-06-21T19:33:19Z | 0 | 1 | null |
[
"en",
"region:us"
] | null | 2023-05-19T21:06:02Z |
---
language:
- en
---
This repo hosts a collection of custom LoRA finetunes for [Pony Diffusion V4](https://huggingface.co/AstraliteHeart/pony-diffusion-v4).
|
therealvul/RVC
|
therealvul
| 2023-06-21T19:32:18Z | 0 | 1 | null |
[
"en",
"region:us"
] | null | 2023-04-24T21:13:09Z |
---
language:
- en
---
This is a collection of RVC models made by the Pony Preservation Project using audio clips taken from MLP:FiM.
|
breadlicker45/llama-test
|
breadlicker45
| 2023-06-21T19:32:17Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T12:31:51Z |
this is fine-tuned/trained on nothing, DO NOT DOWNLOAD
|
therealvul/so-vits-svc-4.0-cluster
|
therealvul
| 2023-06-21T19:31:05Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2023-03-10T04:09:20Z |
---
language:
- en
---
This is a collection of extra so-vits-svc-4.0 cluster models made by the Pony Preservation Project using audio clips taken from MLP:FiM.
|
therealvul/so-vits-svc-3.0
|
therealvul
| 2023-06-21T19:30:27Z | 0 | 1 | null |
[
"en",
"region:us"
] | null | 2023-03-10T19:24:14Z |
---
language:
- en
---
This is a collection of so-vits-svc-3.0 models made by the Pony Preservation Project using audio clips taken from MLP:FiM. The top directory also contains pretrained models for so-vits-svc 3.0.
|
zslrmhb/Reinforce-Cartpole-v1
|
zslrmhb
| 2023-06-21T19:20:17Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T19:19:37Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 495.24 +/- 47.36
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kchen621/Reinforce-Pixelcopter-PLE-v0
|
kchen621
| 2023-06-21T19:00:37Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T16:04:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.30 +/- 32.28
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fedbor/quarto_modello
|
fedbor
| 2023-06-21T18:53:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-21T18:53:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
S3S3/ppo-LunarLander-v2.2
|
S3S3
| 2023-06-21T18:53:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T18:53:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.11 +/- 22.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
koreadaeil/my_awesome_qa_model
|
koreadaeil
| 2023-06-21T18:53:31Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-21T17:53:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: koreadaeil/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# koreadaeil/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.8709
- Validation Loss: 5.8422
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.9555 | 5.8683 | 0 |
| 5.9065 | 5.8422 | 1 |
| 5.8709 | 5.8422 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
DinaSalama/xlm-roberta-base-finetuned-panx-de
|
DinaSalama
| 2023-06-21T18:52:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-02-04T21:15:16Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.855602224997162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1351
- F1: 0.8556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1702 | 0.8117 |
| 0.213 | 2.0 | 526 | 0.1401 | 0.8349 |
| 0.213 | 3.0 | 789 | 0.1351 | 0.8556 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Felix92/doctr-dummy-torch-parseq
|
Felix92
| 2023-06-21T18:46:51Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-06-21T18:46:45Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
kejolong/empress
|
kejolong
| 2023-06-21T18:43:32Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T18:41:58Z |
---
license: creativeml-openrail-m
---
|
mariololo/ppo-lunarlander
|
mariololo
| 2023-06-21T18:41:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-21T18:40:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.99 +/- 66.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DunnBC22/codebert-base-mlm-Malicious_URLs
|
DunnBC22
| 2023-06-21T18:37:32Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-21T14:47:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: codebert-base-mlm-Malicious_URLs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codebert-base-mlm-Malicious_URLs
This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7442
- Accuracy: 0.7322
- Weighted f1: 0.6538
- Micro f1: 0.7322
- Macro f1: 0.4303
- Weighted recall: 0.7322
- Micro recall: 0.7322
- Macro recall: 0.4233
- Weighted precision: 0.6314
- Micro precision: 0.7322
- Macro precision: 0.6034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rd124/distilbert-base-uncased-finetuned-imdb
|
rd124
| 2023-06-21T18:32:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-21T18:22:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
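The card does not yet document usage; a minimal fill-mask sketch (the example sentence is illustrative) might look like:
```python
from transformers import pipeline

# Query the masked-language model fine-tuned on IMDB reviews
fill_mask = pipeline("fill-mask", model="rd124/distilbert-base-uncased-finetuned-imdb")

# The model predicts candidate tokens for the [MASK] position
print(fill_mask("This movie was an absolute [MASK]."))
```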
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
shahafw/a2c-PandaReachDense-v2
|
shahafw
| 2023-06-21T18:32:17Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-10T21:59:12Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.76 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal, hedged sketch of loading the checkpoint with `huggingface_sb3` (the original card leaves this as a TODO; the checkpoint filename is an assumption and should be checked against the repository files):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed, not documented in the card)
checkpoint = load_from_hub(repo_id="shahafw/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
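A quick evaluation sketch, not part of the original card, assuming panda-gym 2.x (which registers the -v2 tasks on classic Gym) and a compatible Stable-Baselines3 version:
```python
import gym
import panda_gym  # noqa: F401 -- importing registers PandaReachDense-v2
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out the loaded policy for a few episodes and report the mean reward
env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```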
|
anujsahani01/finetuned_del
|
anujsahani01
| 2023-06-21T18:19:39Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-21T17:08:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned_del
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_del
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mr](https://huggingface.co/Helsinki-NLP/opus-mt-en-mr) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
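The card does not yet document usage; a minimal translation sketch (the example sentence is illustrative) might look like:
```python
from transformers import pipeline

# English -> Marathi translation with the fine-tuned MarianMT checkpoint
translator = pipeline("translation", model="anujsahani01/finetuned_del")

# Translate an illustrative English sentence
print(translator("How are you today?"))
```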
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|