| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 12:31:00) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 12:28:53) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
simonycl/roberta-base-sst-2-16-13-30
|
simonycl
| 2023-08-09T00:53:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T00:45:22Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-sst-2-16-13-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-sst-2-16-13-30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6585
- Accuracy: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
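As a rough guide, the settings above map onto 🤗 `TrainingArguments` along these lines (a minimal sketch assuming the standard `Trainer` was used; the `output_dir` is a placeholder, not taken from the original run):
```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed configuration; dataset/model wiring is omitted.
training_args = TrainingArguments(
    output_dir="roberta-base-sst-2-16-13-30",  # placeholder
    learning_rate=1.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=5,
    num_train_epochs=30,
)
```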
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6934 | 0.5 |
| No log | 2.0 | 2 | 0.6933 | 0.5 |
| No log | 3.0 | 3 | 0.6933 | 0.5 |
| No log | 4.0 | 4 | 0.6929 | 0.5 |
| No log | 5.0 | 5 | 0.6925 | 0.5 |
| No log | 6.0 | 6 | 0.6920 | 0.5 |
| No log | 7.0 | 7 | 0.6914 | 0.5 |
| No log | 8.0 | 8 | 0.6909 | 0.6875 |
| No log | 9.0 | 9 | 0.6904 | 0.625 |
| 0.6897 | 10.0 | 10 | 0.6899 | 0.5 |
| 0.6897 | 11.0 | 11 | 0.6894 | 0.5 |
| 0.6897 | 12.0 | 12 | 0.6888 | 0.5 |
| 0.6897 | 13.0 | 13 | 0.6880 | 0.5312 |
| 0.6897 | 14.0 | 14 | 0.6871 | 0.5312 |
| 0.6897 | 15.0 | 15 | 0.6860 | 0.5312 |
| 0.6897 | 16.0 | 16 | 0.6849 | 0.6562 |
| 0.6897 | 17.0 | 17 | 0.6836 | 0.7188 |
| 0.6897 | 18.0 | 18 | 0.6821 | 0.6875 |
| 0.6897 | 19.0 | 19 | 0.6805 | 0.6875 |
| 0.6642 | 20.0 | 20 | 0.6788 | 0.6875 |
| 0.6642 | 21.0 | 21 | 0.6768 | 0.7188 |
| 0.6642 | 22.0 | 22 | 0.6746 | 0.7188 |
| 0.6642 | 23.0 | 23 | 0.6723 | 0.7188 |
| 0.6642 | 24.0 | 24 | 0.6696 | 0.7188 |
| 0.6642 | 25.0 | 25 | 0.6670 | 0.6875 |
| 0.6642 | 26.0 | 26 | 0.6644 | 0.6875 |
| 0.6642 | 27.0 | 27 | 0.6622 | 0.7188 |
| 0.6642 | 28.0 | 28 | 0.6604 | 0.7188 |
| 0.6642 | 29.0 | 29 | 0.6592 | 0.6875 |
| 0.5945 | 30.0 | 30 | 0.6585 | 0.6875 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
asenella/MMVAEPlus_beta_5_scale_False_seed_0
|
asenella
| 2023-08-09T00:19:54Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T16:50:57Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/MMVAEPlus_beta_5_scale_False_seed_0")
```
|
RomyMy/dqn-SpaceInvadersNoFrameskip-v4
|
RomyMy
| 2023-08-09T00:15:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T00:15:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 657.50 +/- 342.41
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RomyMy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RomyMy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
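Alternatively, the checkpoint can be loaded programmatically with `huggingface_sb3` and Stable Baselines3; this is a minimal sketch, and the filename below is an assumption based on the usual RL Zoo `algo-EnvId.zip` naming rather than something stated on this card:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the zipped checkpoint from this repo (filename assumed) and load it with SB3.
checkpoint = load_from_hub(
    repo_id="RomyMy/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```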
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RomyMy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
thisiskeithkwan/cantomed7
|
thisiskeithkwan
| 2023-08-09T00:02:28Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-08T17:57:46Z |
---
language:
- yue
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper medium 1/10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium 1/10
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 dataset.
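The card does not include an inference snippet; a minimal, hedged sketch using the 🤗 `pipeline` API (the audio path is a placeholder) would be:
```python
from transformers import pipeline

# Cantonese (yue) speech recognition with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="thisiskeithkwan/cantomed7")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```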
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
C-Lo/masked-dataset
|
C-Lo
| 2023-08-08T23:45:45Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T23:41:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: masked-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# masked-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
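No usage example is provided; a minimal, hedged inference sketch (label names depend on the training configuration) would be:
```python
from transformers import pipeline

# Text classification with the fine-tuned DistilBERT checkpoint.
classifier = pipeline("text-classification", model="C-Lo/masked-dataset")
print(classifier("This movie was a complete waste of time."))
```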
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
agustinl/reinforce-cartpole-v1
|
agustinl
| 2023-08-08T23:37:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T23:37:36Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
patonw/ppo-SnowballTarget
|
patonw
| 2023-08-08T23:13:34Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-08T23:13:29Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: patonw/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
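If you would rather fetch the exported `.onnx` policy locally than view it in the browser, a minimal sketch with `huggingface_hub` (the target directory is a placeholder) is:
```python
from huggingface_hub import snapshot_download

# Download the whole repository, including the exported .onnx policy, to a local folder.
local_dir = snapshot_download(repo_id="patonw/ppo-SnowballTarget", local_dir="./SnowballTarget")
print(local_dir)
```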
|
DaniyalMufti/ppo-Huggy
|
DaniyalMufti
| 2023-08-08T23:00:27Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-08T22:51:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: DaniyalMufti/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
iamnambiar/Reinforce-CartPole-v1
|
iamnambiar
| 2023-08-08T22:21:22Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T22:21:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sofia-todeschini/BioLinkBERT-LitCovid-v1.2.1
|
sofia-todeschini
| 2023-08-08T22:09:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T19:44:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioLinkBERT-LitCovid-v1.2.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioLinkBERT-LitCovid-v1.2.1
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2205
- F1 micro: 0.9016
- F1 macro: 0.8505
- F1 weighted: 0.9044
- F1 samples: 0.9056
- Precision micro: 0.8545
- Precision macro: 0.7857
- Precision weighted: 0.8625
- Precision samples: 0.8862
- Recall micro: 0.9540
- Recall macro: 0.9431
- Recall weighted: 0.9540
- Recall samples: 0.9610
- Roc Auc: 0.9578
- Accuracy: 0.7211
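For context, the micro/macro/weighted/samples figures above are the standard multi-label averaging modes; a minimal scikit-learn sketch on hypothetical binary label matrices (not this model's actual evaluation code) illustrates how such numbers are computed:
```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical multi-label ground truth and predictions (n_samples x n_labels indicator matrices).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 1]])

for avg in ["micro", "macro", "weighted", "samples"]:
    print(avg, f1_score(y_true, y_pred, average=avg, zero_division=0))
```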
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.2839 | 1.0 | 2211 | 0.2205 | 0.9016 | 0.8505 | 0.9044 | 0.9056 | 0.8545 | 0.7857 | 0.8625 | 0.8862 | 0.9540 | 0.9431 | 0.9540 | 0.9610 | 0.9578 | 0.7211 |
| 0.1926 | 2.0 | 4422 | 0.2477 | 0.9134 | 0.8734 | 0.9147 | 0.9159 | 0.8770 | 0.8309 | 0.8808 | 0.9026 | 0.9529 | 0.9283 | 0.9529 | 0.9590 | 0.9607 | 0.7554 |
| 0.1341 | 3.0 | 6633 | 0.2667 | 0.9155 | 0.8749 | 0.9164 | 0.9170 | 0.8823 | 0.8328 | 0.8851 | 0.9059 | 0.9513 | 0.9251 | 0.9513 | 0.9569 | 0.9606 | 0.7642 |
| 0.1161 | 4.0 | 8844 | 0.2864 | 0.9188 | 0.8783 | 0.9195 | 0.9202 | 0.8938 | 0.8451 | 0.8958 | 0.9150 | 0.9452 | 0.9173 | 0.9452 | 0.9525 | 0.9593 | 0.7758 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Flaggoneer/snoozy-so-python
|
Flaggoneer
| 2023-08-08T22:00:52Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T22:00:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
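These settings correspond roughly to the following `BitsAndBytesConfig`; this is a hedged reconstruction, and since the base model is not named on this card the checkpoint string below is purely a placeholder:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-placeholder" stands in for whatever base checkpoint this adapter was trained on.
base = AutoModelForCausalLM.from_pretrained("base-model-placeholder", quantization_config=bnb_config)
```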
### Framework versions
- PEFT 0.5.0.dev0
|
divyeshrajpura/speecht5-finetuned-voxpopuli-nl
|
divyeshrajpura
| 2023-08-08T21:53:36Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-08T18:46:09Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5-finetuned-voxpopuli-nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5-finetuned-voxpopuli-nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4556
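No inference snippet is provided; a minimal, hedged SpeechT5 sketch looks like the following (the speaker embedding is a zero placeholder here, whereas in practice an x-vector such as one from `Matthijs/cmu-arctic-xvectors` would be used):
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("divyeshrajpura/speecht5-finetuned-voxpopuli-nl")
model = SpeechT5ForTextToSpeech.from_pretrained("divyeshrajpura/speecht5-finetuned-voxpopuli-nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; replace with a real x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```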
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5157 | 4.3 | 1000 | 0.4752 |
| 0.4994 | 8.6 | 2000 | 0.4619 |
| 0.5002 | 12.9 | 3000 | 0.4578 |
| 0.4968 | 17.2 | 4000 | 0.4556 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
reginaboateng/Compacter_BioBERT_adapter_ner_pico_for_classification_task
|
reginaboateng
| 2023-08-08T21:49:48Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pico_ner",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-08-08T21:49:46Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:pico_ner
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/Compacter_BioBERT_adapter_ner_pico_for_classification_task` for dmis-lab/biobert-v1.1
An [adapter](https://adapterhub.ml) for the `dmis-lab/biobert-v1.1` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("dmis-lab/biobert-v1.1")
adapter_name = model.load_adapter("reginaboateng/Compacter_BioBERT_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
shtif/whisper-tiny-en
|
shtif
| 2023-08-08T21:46:36Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-08T20:13:59Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: Whisper Tiny - shtif
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33412042502951594
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny - shtif
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6315
- Wer Ortho: 0.3368
- Wer: 0.3341
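For reference, WER figures like the ones above are typically computed with the 🤗 `evaluate` library; a toy sketch (not this model's evaluation code):
```python
import evaluate

wer_metric = evaluate.load("wer")
# One substitution over four reference words -> WER = 0.25
print(wer_metric.compute(references=["turn on the lights"], predictions=["turn off the lights"]))
```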
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0004 | 17.86 | 500 | 0.6315 | 0.3368 | 0.3341 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Josrf/ppo-SnowballTarget
|
Josrf
| 2023-08-08T21:42:32Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-08T20:16:19Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Josrf/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jordyvl/vit-base_rvl-cdip-tiny_rvl_cdip-NK1000_hint_rand
|
jordyvl
| 2023-08-08T21:39:42Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-08T13:31:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip-tiny_rvl_cdip-NK1000_hint_rand
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip-tiny_rvl_cdip-NK1000_hint_rand
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 75.5808
- Accuracy: 0.583
- Brier Loss: 0.7311
- Nll: 3.9633
- F1 Micro: 0.583
- F1 Macro: 0.5838
- Ece: 0.3399
- Aurc: 0.2128
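Ece above denotes expected calibration error; a minimal sketch of the usual equal-width binned estimate (not necessarily the exact implementation used for this run) is:
```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Equal-width binned ECE: bin-weight-averaged |accuracy - mean confidence| per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Toy usage with hypothetical predicted confidences and 0/1 correctness indicators.
print(expected_calibration_error(np.array([0.9, 0.8, 0.6, 0.95]), np.array([1.0, 0.0, 1.0, 1.0])))
```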
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 78.0119 | 0.1285 | 0.9098 | 6.7342 | 0.1285 | 0.0748 | 0.0496 | 0.7634 |
| 77.7969 | 2.0 | 500 | 77.3633 | 0.1595 | 0.8985 | 5.2942 | 0.1595 | 0.1038 | 0.0509 | 0.7216 |
| 77.7969 | 3.0 | 750 | 76.6773 | 0.2545 | 0.8551 | 3.9015 | 0.2545 | 0.2006 | 0.0741 | 0.5967 |
| 76.735 | 4.0 | 1000 | 76.1721 | 0.312 | 0.8123 | 3.4141 | 0.312 | 0.2785 | 0.0855 | 0.5018 |
| 76.735 | 5.0 | 1250 | 76.0027 | 0.3703 | 0.7573 | 3.2539 | 0.3703 | 0.3299 | 0.0764 | 0.4161 |
| 75.8262 | 6.0 | 1500 | 76.3256 | 0.4143 | 0.7290 | 3.1129 | 0.4143 | 0.3995 | 0.0835 | 0.3792 |
| 75.8262 | 7.0 | 1750 | 75.5753 | 0.4575 | 0.6838 | 2.8940 | 0.4575 | 0.4421 | 0.0595 | 0.3262 |
| 75.3656 | 8.0 | 2000 | 75.2875 | 0.475 | 0.6554 | 2.7996 | 0.4750 | 0.4596 | 0.0715 | 0.2976 |
| 75.3656 | 9.0 | 2250 | 75.3849 | 0.4833 | 0.6446 | 2.7232 | 0.4833 | 0.4523 | 0.0651 | 0.2885 |
| 75.0748 | 10.0 | 2500 | 75.3431 | 0.5172 | 0.6173 | 2.6664 | 0.5172 | 0.4905 | 0.0563 | 0.2606 |
| 75.0748 | 11.0 | 2750 | 75.0478 | 0.5357 | 0.5982 | 2.7014 | 0.5357 | 0.5207 | 0.0550 | 0.2384 |
| 74.821 | 12.0 | 3000 | 75.1324 | 0.5325 | 0.5973 | 2.6161 | 0.5325 | 0.5202 | 0.0569 | 0.2402 |
| 74.821 | 13.0 | 3250 | 75.0049 | 0.528 | 0.5996 | 2.6859 | 0.528 | 0.5157 | 0.0657 | 0.2408 |
| 74.613 | 14.0 | 3500 | 74.8702 | 0.5453 | 0.5881 | 2.7150 | 0.5453 | 0.5455 | 0.0661 | 0.2302 |
| 74.613 | 15.0 | 3750 | 74.8427 | 0.5595 | 0.5697 | 2.5605 | 0.5595 | 0.5479 | 0.0765 | 0.2117 |
| 74.421 | 16.0 | 4000 | 74.9157 | 0.5503 | 0.5829 | 2.7215 | 0.5503 | 0.5524 | 0.0765 | 0.2219 |
| 74.421 | 17.0 | 4250 | 74.9051 | 0.5633 | 0.5816 | 2.6715 | 0.5633 | 0.5577 | 0.0924 | 0.2186 |
| 74.2453 | 18.0 | 4500 | 74.9910 | 0.5733 | 0.5722 | 2.6963 | 0.5733 | 0.5717 | 0.0930 | 0.2107 |
| 74.2453 | 19.0 | 4750 | 74.8632 | 0.5575 | 0.5892 | 2.6981 | 0.5575 | 0.5549 | 0.1073 | 0.2198 |
| 74.0712 | 20.0 | 5000 | 74.8128 | 0.5757 | 0.5794 | 2.7227 | 0.5757 | 0.5697 | 0.1235 | 0.2083 |
| 74.0712 | 21.0 | 5250 | 74.7545 | 0.575 | 0.5794 | 2.7000 | 0.575 | 0.5700 | 0.1372 | 0.2015 |
| 73.9033 | 22.0 | 5500 | 74.7493 | 0.5737 | 0.5841 | 2.7996 | 0.5737 | 0.5806 | 0.1341 | 0.2073 |
| 73.9033 | 23.0 | 5750 | 74.7641 | 0.582 | 0.5831 | 2.7846 | 0.582 | 0.5780 | 0.1576 | 0.1985 |
| 73.7364 | 24.0 | 6000 | 74.8125 | 0.5807 | 0.5944 | 2.8725 | 0.5807 | 0.5767 | 0.1719 | 0.2015 |
| 73.7364 | 25.0 | 6250 | 74.9721 | 0.573 | 0.6132 | 2.9232 | 0.573 | 0.5734 | 0.1920 | 0.2086 |
| 73.5899 | 26.0 | 6500 | 74.8675 | 0.5823 | 0.6127 | 2.9200 | 0.5823 | 0.5788 | 0.1969 | 0.2059 |
| 73.5899 | 27.0 | 6750 | 74.9213 | 0.5723 | 0.6234 | 3.0482 | 0.5723 | 0.5717 | 0.2138 | 0.2085 |
| 73.4419 | 28.0 | 7000 | 74.9436 | 0.5815 | 0.6324 | 3.0789 | 0.5815 | 0.5803 | 0.2223 | 0.2058 |
| 73.4419 | 29.0 | 7250 | 74.8826 | 0.5747 | 0.6408 | 3.1380 | 0.5747 | 0.5711 | 0.2428 | 0.2044 |
| 73.3198 | 30.0 | 7500 | 75.0310 | 0.5633 | 0.6722 | 3.2517 | 0.5633 | 0.5639 | 0.2571 | 0.2226 |
| 73.3198 | 31.0 | 7750 | 75.0300 | 0.5577 | 0.6795 | 3.3520 | 0.5577 | 0.5627 | 0.2611 | 0.2255 |
| 73.2086 | 32.0 | 8000 | 74.9569 | 0.5793 | 0.6614 | 3.3345 | 0.5793 | 0.5829 | 0.2623 | 0.2070 |
| 73.2086 | 33.0 | 8250 | 75.1474 | 0.5655 | 0.6902 | 3.5319 | 0.5655 | 0.5656 | 0.2780 | 0.2260 |
| 73.1102 | 34.0 | 8500 | 75.1176 | 0.5697 | 0.6926 | 3.5011 | 0.5697 | 0.5685 | 0.2891 | 0.2127 |
| 73.1102 | 35.0 | 8750 | 75.2834 | 0.5673 | 0.7085 | 3.7150 | 0.5673 | 0.5688 | 0.2945 | 0.2210 |
| 73.0239 | 36.0 | 9000 | 75.2426 | 0.566 | 0.7101 | 3.6822 | 0.566 | 0.5679 | 0.3029 | 0.2200 |
| 73.0239 | 37.0 | 9250 | 75.3049 | 0.5743 | 0.7082 | 3.6300 | 0.5743 | 0.5758 | 0.3044 | 0.2185 |
| 72.9631 | 38.0 | 9500 | 75.3404 | 0.5695 | 0.7220 | 3.7386 | 0.5695 | 0.5741 | 0.3177 | 0.2210 |
| 72.9631 | 39.0 | 9750 | 75.4376 | 0.5775 | 0.7181 | 3.8412 | 0.5775 | 0.5784 | 0.3148 | 0.2191 |
| 72.9028 | 40.0 | 10000 | 75.4664 | 0.5777 | 0.7178 | 3.9272 | 0.5777 | 0.5775 | 0.3178 | 0.2233 |
| 72.9028 | 41.0 | 10250 | 75.5305 | 0.5737 | 0.7279 | 3.8240 | 0.5737 | 0.5761 | 0.3271 | 0.2215 |
| 72.8505 | 42.0 | 10500 | 75.4606 | 0.5783 | 0.7225 | 3.8401 | 0.5783 | 0.5805 | 0.3261 | 0.2156 |
| 72.8505 | 43.0 | 10750 | 75.5084 | 0.5793 | 0.7242 | 3.8552 | 0.5793 | 0.5791 | 0.3308 | 0.2115 |
| 72.8091 | 44.0 | 11000 | 75.4797 | 0.5817 | 0.7256 | 3.8946 | 0.5817 | 0.5825 | 0.3340 | 0.2112 |
| 72.8091 | 45.0 | 11250 | 75.5695 | 0.5793 | 0.7297 | 3.9742 | 0.5793 | 0.5809 | 0.3379 | 0.2150 |
| 72.7801 | 46.0 | 11500 | 75.5592 | 0.5807 | 0.7331 | 3.9445 | 0.5807 | 0.5830 | 0.3378 | 0.2151 |
| 72.7801 | 47.0 | 11750 | 75.5976 | 0.5833 | 0.7303 | 3.9669 | 0.5833 | 0.5840 | 0.3380 | 0.2145 |
| 72.7606 | 48.0 | 12000 | 75.5952 | 0.5833 | 0.7320 | 3.9813 | 0.5833 | 0.5847 | 0.3380 | 0.2148 |
| 72.7606 | 49.0 | 12250 | 75.5621 | 0.5843 | 0.7309 | 3.9491 | 0.5843 | 0.5851 | 0.3385 | 0.2127 |
| 72.7486 | 50.0 | 12500 | 75.5808 | 0.583 | 0.7311 | 3.9633 | 0.583 | 0.5838 | 0.3399 | 0.2128 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
luistakahashi/my-awesome-setfit-model
|
luistakahashi
| 2023-08-08T21:25:30Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-08T21:25:20Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# luistakahashi/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
johnpaulbin/lora-trained-xl-colab
|
johnpaulbin
| 2023-08-08T21:00:05Z | 8 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-08T20:06:41Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - johnpaulbin/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
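A minimal, hedged inference sketch with 🤗 diffusers, assuming these LoRA weights load via `load_lora_weights` and using the fp16-fix VAE mentioned above:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("johnpaulbin/lora-trained-xl-colab")

image = pipe("a photo of sks", num_inference_steps=25).images[0]
image.save("sks.png")
```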
|
reginaboateng/pfeiffer_clinical_bert_adapter_ner_pico_for_classification_task
|
reginaboateng
| 2023-08-08T20:32:46Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pico_ner",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-08-08T20:32:44Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:pico_ner
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/pfeiffer_clinical_bert_adapter_ner_pico_for_classification_task` for emilyalsentzer/Bio_ClinicalBERT
An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_clinical_bert_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
AyoubChLin/roberta-large-bbc_news
|
AyoubChLin
| 2023-08-08T20:29:24Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:AyoubChLin/autotrain-data-roberta-large-bbc_news",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-12T19:09:36Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- AyoubChLin/autotrain-data-roberta-large-bbc_news
co2_eq_emissions:
emissions: 1.9843929651071104
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 48943118458
- CO2 Emissions (in grams): 1.9844
## Validation Metrics
- Loss: 0.062
- Accuracy: 0.991
- Macro F1: 0.991
- Micro F1: 0.991
- Weighted F1: 0.991
- Macro Precision: 0.991
- Micro Precision: 0.991
- Weighted Precision: 0.991
- Macro Recall: 0.992
- Micro Recall: 0.991
- Weighted Recall: 0.991
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AyoubChLin/autotrain-roberta-large-bbc_news-48943118458
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AyoubChLin/autotrain-roberta-large-bbc_news-48943118458", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AyoubChLin/autotrain-roberta-large-bbc_news-48943118458", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
reginaboateng/pfeiffer_SciBert_adapter_ner_pico_for_classification_task
|
reginaboateng
| 2023-08-08T20:26:55Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:pico_ner",
"bert",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-08-08T20:26:52Z |
---
tags:
- adapter-transformers
- adapterhub:pico_ner
- bert
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/pfeiffer_SciBert_adapter_ner_pico_for_classification_task` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_SciBert_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
mskov/falcon-7b-completion
|
mskov
| 2023-08-08T20:14:45Z | 5 | 0 |
peft
|
[
"peft",
"pytorch",
"RefinedWebModel",
"custom_code",
"region:us"
] | null | 2023-07-26T20:08:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
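The card gives no usage example; a hedged sketch of loading the adapter with PEFT (assuming the base model path stored in the adapter config resolves, and that the custom Falcon/RefinedWeb code needs `trust_remote_code=True`) is:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "mskov/falcon-7b-completion"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model recorded in the adapter config, then attach the PEFT weights.
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
```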
|
psxjp5/mt5-small_test_35
|
psxjp5
| 2023-08-08T20:12:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-08T17:25:08Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: mt5-small_test_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small_test_35
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7383
- Rouge1: 43.9482
- Rouge2: 38.4156
- Rougel: 42.6232
- Rougelsum: 42.674
- Bleu: 33.3469
- Gen Len: 12.4725
- Meteor: 0.4016
- True negatives: 70.997
- False negatives: 11.8271
- Cosine Sim: 0.7532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 9
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | Meteor | True negatives | False negatives | Cosine Sim |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:------:|:--------------:|:---------------:|:----------:|
| 2.4524 | 1.0 | 175 | 0.9783 | 17.6419 | 14.587 | 17.1176 | 17.1329 | 6.1296 | 7.3271 | 0.1531 | 75.7704 | 59.8602 | 0.3786 |
| 1.1433 | 1.99 | 350 | 0.8448 | 38.9957 | 33.2414 | 37.7868 | 37.8653 | 27.5883 | 12.3274 | 0.3526 | 60.3625 | 17.236 | 0.6954 |
| 0.9381 | 2.99 | 525 | 0.8067 | 42.4146 | 36.3126 | 40.964 | 41.0427 | 31.5838 | 13.0716 | 0.3833 | 59.6375 | 11.1801 | 0.7425 |
| 0.8116 | 3.98 | 700 | 0.7712 | 43.8741 | 37.8446 | 42.3785 | 42.4778 | 33.1873 | 13.0574 | 0.3982 | 61.9335 | 9.5238 | 0.7586 |
| 0.7218 | 4.98 | 875 | 0.7439 | 43.1579 | 37.3057 | 41.7059 | 41.8024 | 32.5124 | 12.7853 | 0.3931 | 65.8006 | 11.2836 | 0.7498 |
| 0.6461 | 5.97 | 1050 | 0.7254 | 39.9226 | 34.552 | 38.7033 | 38.7665 | 27.9936 | 11.4675 | 0.3638 | 77.9456 | 18.5041 | 0.7003 |
| 0.5852 | 6.97 | 1225 | 0.7290 | 44.131 | 38.3527 | 42.7974 | 42.8549 | 33.6955 | 12.7811 | 0.4026 | 67.855 | 10.3778 | 0.7599 |
| 0.5421 | 7.96 | 1400 | 0.7248 | 44.5368 | 38.7443 | 43.2111 | 43.2976 | 34.1121 | 12.7875 | 0.4071 | 67.5529 | 10.4037 | 0.7637 |
| 0.5026 | 8.96 | 1575 | 0.7383 | 43.9482 | 38.4156 | 42.6232 | 42.674 | 33.3469 | 12.4725 | 0.4016 | 70.997 | 11.8271 | 0.7532 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BauyrjanQ/whisper-kk-speech2ner-b16-ms2000-s-cl
|
BauyrjanQ
| 2023-08-08T20:10:46Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-08T05:01:00Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-kk-speech2ner-b16-ms2000-s-cl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kk-speech2ner-b16-ms2000-s-cl
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3907
- Wer: 358.0878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.364 | 0.18 | 800 | 0.4273 | 197.3714 |
| 1.1854 | 0.37 | 1600 | 0.3907 | 358.0878 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
azhang1212/angela_punc_shuffle_eval
|
azhang1212
| 2023-08-08T20:10:36Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-08T18:52:35Z |
---
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: angela_punc_shuffle_eval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# angela_punc_shuffle_eval
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3164
- Precision: 0.4292
- Recall: 0.2191
- F1: 0.2901
- Accuracy: 0.9218
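A minimal, hedged inference sketch for the token-classification head (the entity label set depends on the unstated training data):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="azhang1212/angela_punc_shuffle_eval",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Wizkid performed in Lagos last night."))
```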
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1532 | 1.0 | 1283 | 0.2538 | 0.4284 | 0.1218 | 0.1897 | 0.9213 |
| 0.1309 | 2.0 | 2566 | 0.2672 | 0.4457 | 0.1419 | 0.2152 | 0.9218 |
| 0.1136 | 3.0 | 3849 | 0.2666 | 0.4340 | 0.1806 | 0.2551 | 0.9215 |
| 0.0904 | 4.0 | 5132 | 0.2973 | 0.4555 | 0.1957 | 0.2738 | 0.9235 |
| 0.0751 | 5.0 | 6415 | 0.3164 | 0.4292 | 0.2191 | 0.2901 | 0.9218 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
shubhamagarwal92/LunarLander-v2-ppo-unit8
|
shubhamagarwal92
| 2023-08-08T20:06:58Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T20:06:52Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -170.28 +/- 142.97
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'shubhamagarwal92/LunarLander-v2-ppo-unit8'
'batch_size': 512
'minibatch_size': 128}
```
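For reference, in a cleanRL-style PPO setup the derived values follow directly from the settings above: `batch_size = num_envs × num_steps = 4 × 128 = 512` and `minibatch_size = batch_size / num_minibatches = 512 / 4 = 128`, matching the last two entries of the dictionary.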
|
wesley7137/llama13b-wizardlm-uncensored-medicaldialogue
|
wesley7137
| 2023-08-08T19:42:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T19:40:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
mrm8488/mt5-base-ft-rf-02
|
mrm8488
| 2023-08-08T19:38:41Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-08T19:04:47Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
model-index:
- name: mt5-base-ft-rf-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-ft-rf-02
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 43.082 | 0.24 | 50 | 37.1069 |
| 34.6827 | 0.49 | 100 | 28.8296 |
| 21.0188 | 0.73 | 150 | 19.9344 |
| 18.3905 | 0.98 | 200 | 12.0120 |
| 14.342 | 1.22 | 250 | 9.2877 |
| 6.2116 | 1.46 | 300 | 6.1602 |
| 6.5474 | 1.71 | 350 | 4.6816 |
| 1.9222 | 1.95 | 400 | 2.6431 |
| 2.0579 | 2.2 | 450 | 1.2741 |
| 1.1028 | 2.44 | 500 | 0.9638 |
| 1.3341 | 2.68 | 550 | 0.8896 |
| 0.6531 | 2.93 | 600 | 0.8461 |
| 0.9805 | 3.17 | 650 | 0.7652 |
| 0.7167 | 3.41 | 700 | 0.7544 |
| 1.0224 | 3.66 | 750 | 0.7493 |
| 0.5367 | 3.9 | 800 | 0.7188 |
| 0.9352 | 4.15 | 850 | 0.6844 |
| 0.4927 | 4.39 | 900 | 0.6595 |
| 0.7141 | 4.63 | 950 | 0.6458 |
| 0.5773 | 4.88 | 1000 | 0.5911 |
| 0.4791 | 5.12 | 1050 | 0.5691 |
| 0.498 | 5.37 | 1100 | 0.5572 |
| 0.4306 | 5.61 | 1150 | 0.5315 |
| 0.334 | 5.85 | 1200 | 0.5123 |
| 0.3783 | 6.1 | 1250 | 0.4970 |
| 0.7719 | 6.34 | 1300 | 0.4774 |
| 0.3732 | 6.59 | 1350 | 0.4591 |
| 0.6203 | 6.83 | 1400 | 0.4482 |
| 0.4669 | 7.07 | 1450 | 0.4434 |
| 0.5568 | 7.32 | 1500 | 0.4307 |
| 0.6352 | 7.56 | 1550 | 0.4257 |
| 1.4137 | 7.8 | 1600 | 0.4229 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
usvsnsp/pythia-6.9b-rm-full-hh-rlhf
|
usvsnsp
| 2023-08-08T19:37:39Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-classification",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T19:28:53Z |
wandb run: https://wandb.ai/eleutherai/pythia-rlhf/runs/hlfywf2d
|
s3nh/stabilityai-stablecode-completion-alpha-3b-4k-GPTQ
|
s3nh
| 2023-08-08T19:22:24Z | 4 | 0 |
transformers
|
[
"transformers",
"gpt_neox",
"text-generation",
"en",
"arxiv:2104.09864",
"arxiv:1910.02054",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-08T19:17:30Z |
---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GPTQ-format model files for [this project](https://huggingface.co/stabilityai/stablecode-completion-alpha-3b-4k).
### inference
# `StableCode-Completion-Alpha-3B-4K`
## Model Description
`StableCode-Completion-Alpha-3B-4K` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.
## Usage
The model is intended to perform single- and multi-line code completion from a long context window of up to 4k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B-4k` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b-4k")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/stablecode-completion-alpha-3b-4k",
trust_remote_code=True,
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=48,
temperature=0.2,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
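Since this repository ships GPTQ-format files while the snippet above loads the original full-precision checkpoint, a hedged sketch of loading the quantized weights with `auto-gptq` might look like this (the file layout, tokenizer location, and flags are assumptions, not taken from this repo):
```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo = "s3nh/stabilityai-stablecode-completion-alpha-3b-4k-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", trust_remote_code=True)

inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda:0")
tokens = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```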
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B-4k` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Model Architecture
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560 | 32 | 32 | 4096 |
* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
## Training
`StableCode-Completion-Alpha-3B-4k` is pre-trained at a context length of 4096 for 300 billion tokens on the `bigcode/starcoder-data` dataset.
### Training Dataset
The first pre-training stage relies on 300B tokens sourced from the top programming languages occurring in the Stack Overflow developer survey, as present in the `starcoder-data` dataset.
### Training Procedure
The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the [StarCoder](https://huggingface.co/bigcode/starcoder) tokenizer with a vocabulary size of 49k.
* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)) and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf))
## Use and Limitations
### Intended Use
StableCode-Completion-Alpha-3B-4K independently generates new code completions, but we recommend that you use StableCode-Completion-Alpha-3B-4K together with the tool developed by BigCode and HuggingFace [(huggingface/huggingface-vscode: Code completion VSCode extension for OSS models (github.com))](https://github.com/huggingface/huggingface-vscode), to identify and, if necessary, attribute any outputs that match training code.
### Limitations and bias
This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.
## How to cite
```bibtex
@misc{StableCodeCompleteAlpha4K,
url={[https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k](https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k)},
title={Stable Code Complete Alpha},
author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
}
```
|
MykolaGashevskyi/ppo-Huggy
|
MykolaGashevskyi
| 2023-08-08T19:10:59Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-08T19:10:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MykolaGashevskyi/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Melonie/inpaint-lora
|
Melonie
| 2023-08-08T18:50:18Z | 0 | 0 | null |
[
"tensorboard",
"base_model:runwayml/stable-diffusion-inpainting",
"base_model:finetune:runwayml/stable-diffusion-inpainting",
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-26T17:07:07Z |
---
license: bigscience-openrail-m
base_model: runwayml/stable-diffusion-inpainting
---
|
oegbo/bloomz-560m_prompt_tuning_casual_lm
|
oegbo
| 2023-08-08T18:44:08Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T18:44:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Jhandry/TrashDetection
|
Jhandry
| 2023-08-08T18:38:31Z | 0 | 0 | null |
[
"climate",
"es",
"license:openrail",
"region:us"
] | null | 2023-08-08T18:34:34Z |
---
license: openrail
language:
- es
tags:
- climate
---
|
kernelmachine/silo-pd-1.3b
|
kernelmachine
| 2023-08-08T18:37:47Z | 57 | 2 |
transformers
|
[
"transformers",
"pytorch",
"text-generation",
"openlm",
"silo",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T14:08:42Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- text-generation
- openlm
- silo
---
# Silo Language Models: Isolating Legal Risk in a Datastore
This is Silo-PD, first introduced in [Silo Language Models]() by researchers at University of Washington, UC Berkeley, and the Allen Institute for AI.
### NOTE: Dependencies
To use the model, you need to install a specific transformers fork:
```
pip install git+https://github.com/kernelmachine/transformers@openlm#egg=transformers
```
The model also depends on `xformers`, install via
```
pip install xformers
```
### Model Description
Silo-PD is a 1.3B parameter, decoder-only language model trained on data in the public domain from [the Open License Corpus (OLC)](https://huggingface.co/datasets/kernelmachine/open-license-corpus).
The model is based on the LLaMA architecture as implemented in [OpenLM]().
The model is trained with 128 A100 GPUs across 16 nodes.
### Model and Training Hyperparameters
We follow the model architecture of LLaMa, and we use the GPT-NeoX-20B tokenizer, with 50432 BPE types.
During training, we use 2,048-token sequences that are packed across document boundaries, and we prepend a beginning-of-text token to every document.
We use weight decay of 0.1, the Adam optimizer with beta_2 of 0.95, 2,000 steps of warmup, with a cosine learning rate scheduler.
| Model | #L | #H | d_model | LR | Batch |
|--------|-----|-----|-------------|--------|--------|
| 1.3B | 24 | 16 | 2048 | 1e-3 | 2.6M |
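As a rough illustration (not the actual training code), the optimizer and schedule described above could be set up as follows with PyTorch and the `transformers` scheduler helper; the model object and total step count below are placeholders:
```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder standing in for the 1.3B OpenLM model

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-3,              # peak learning rate from the table above
    betas=(0.9, 0.95),    # Adam with beta_2 = 0.95
    weight_decay=0.1,
)

# ~60B training tokens / ~2.6M tokens per batch ≈ 23k optimizer steps (rough estimate)
num_training_steps = 23_000
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2_000,
    num_training_steps=num_training_steps,
)
```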
### Training data
Specifically, it was trained on the following domain proportions (please see the OLC repository for more details on the data sources for each domain):
| Domain | Tokens (B) | % |
|-----------------|------------|-------|
| Legal | 27.1 | 86.2 |
| Books | 2.9 | 9.3 |
| Science | 1.2 | 3.8 |
| News | 0.2 | 0.7 |
| Total | 31.4 | 100.0 |
We train with early stopping for 60B tokens in total, which amounts to 2 epochs of training over this subset.
Since the distribution of OLC is highly skewed, we perform a simple upweighting scheme where we upsample all data that accounts for less than 5% of the corpus by a factor of 3x, which we found to work well after a sweep of different settings.
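A minimal sketch of that upweighting rule, using the domain shares from the table above (illustrative only):
```python
# Upsample every domain that accounts for <5% of the corpus by 3x, then renormalize.
shares = {"Legal": 0.862, "Books": 0.093, "Science": 0.038, "News": 0.007}
weights = {d: s * (3.0 if s < 0.05 else 1.0) for d, s in shares.items()}
total = sum(weights.values())
sampling_probs = {d: w / total for d, w in weights.items()}
print(sampling_probs)  # Science and News now get roughly 3x their natural share
```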
### Intended Uses and Limitations
This model can be used for prompting for evaluation of downstream tasks as well as text generation.
### How to use
You can use this model directly with a pipeline for text generation.
```python
from transformers import pipeline
generator = pipeline('text-generation', model="kernelmachine/silo-pd-1.3b", device='cuda')
generator("Hello")
[{'generated_text': 'Hello, my dear," said the old man, "I have been waiting for you\na long'}]
```
By default, generation is deterministic. To use top-k sampling, set `do_sample` to `True`.
```python
from transformers import pipeline, set_seed
set_seed(42)
generator = pipeline('text-generation', model="kernelmachine/silo-pd-1.3b", device='cuda', do_sample=True)
generator("Hello")
[{'generated_text': 'Hello, Mother," he called.\n\n"Hello, Son. Have you got a car'}]
```
### Limitations and Bias
Silo-PD inherits the biases and limitations of public domain data, which carry risks of toxic or otherwise unfair output, due to the prevalence of older copyright-expired text.
Silo-PD may also output personally identifiable information, because we did not filter that out of training data.
|
varshashaji/pet-dog-xzg
|
varshashaji
| 2023-08-08T18:30:46Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-08T18:26:33Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Pet-Dog-XZG Dreambooth model trained by varshashaji following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE109
Sample pictures of this concept:

|
ProteinLimay/falcon-assistant-2
|
ProteinLimay
| 2023-08-08T18:00:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T17:59:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
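For reference, a config like the one listed above is typically expressed as a `BitsAndBytesConfig` when loading the base model. The snippet below is only a sketch: the base model name is an assumption inferred from the repo name ("falcon-assistant"), since this card does not state it.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed base model; replace with the actual base used for these adapters.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```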
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/vit-base_rvl_cdip_entropy2_softmax
|
jordyvl
| 2023-08-08T17:33:36Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-01T15:38:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl_cdip_entropy2_softmax
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_cdip_entropy2_softmax
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8809
- Accuracy: 0.8968
- Brier Loss: 0.1890
- Nll: 1.1526
- F1 Micro: 0.8968
- F1 Macro: 0.8969
- Ece: 0.0923
- Aurc: 0.0205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.3547 | 1.0 | 2500 | 0.7036 | 0.8958 | 0.1806 | 0.9568 | 0.8958 | 0.8955 | 0.0815 | 0.0174 |
| 0.3049 | 2.0 | 5000 | 0.7030 | 0.8972 | 0.1784 | 1.0077 | 0.8972 | 0.8975 | 0.0825 | 0.0168 |
| 0.2103 | 3.0 | 7500 | 0.7465 | 0.8946 | 0.1857 | 1.0229 | 0.8946 | 0.8954 | 0.0883 | 0.0178 |
| 0.1548 | 4.0 | 10000 | 0.7640 | 0.8957 | 0.1860 | 1.0530 | 0.8957 | 0.8960 | 0.0893 | 0.0182 |
| 0.1077 | 5.0 | 12500 | 0.7964 | 0.8955 | 0.1877 | 1.0743 | 0.8955 | 0.8955 | 0.0903 | 0.0182 |
| 0.0742 | 6.0 | 15000 | 0.8253 | 0.8959 | 0.1887 | 1.0996 | 0.8959 | 0.8967 | 0.0919 | 0.0202 |
| 0.0495 | 7.0 | 17500 | 0.8505 | 0.8964 | 0.1884 | 1.1281 | 0.8964 | 0.8963 | 0.0920 | 0.0201 |
| 0.0352 | 8.0 | 20000 | 0.8645 | 0.8964 | 0.1895 | 1.1397 | 0.8964 | 0.8964 | 0.0931 | 0.0207 |
| 0.0235 | 9.0 | 22500 | 0.8733 | 0.8984 | 0.1876 | 1.1365 | 0.8984 | 0.8986 | 0.0914 | 0.0204 |
| 0.0176 | 10.0 | 25000 | 0.8809 | 0.8968 | 0.1890 | 1.1526 | 0.8968 | 0.8969 | 0.0923 | 0.0205 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LarryAIDraw/CocoliaV4-09
|
LarryAIDraw
| 2023-08-08T17:24:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-08T17:07:12Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/62870/cocolia-lora-honkai-star-rail
|
chunwoolee0/roberta-keti-air-korquad
|
chunwoolee0
| 2023-08-08T17:22:38Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:korquad",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-08T15:33:13Z |
---
base_model: klue/roberta-base
tags:
- generated_from_trainer
datasets:
- korquad
model-index:
- name: roberta-keti-air-korquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-keti-air-korquad
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the korquad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5572 | 1.0 | 2000 | 0.5212 |
| 0.3247 | 2.0 | 4000 | 0.5645 |
| 0.1786 | 3.0 | 6000 | 0.6731 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
aswathys/my-pet-dog
|
aswathys
| 2023-08-08T17:20:36Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-08T17:16:35Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by aswathys following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
reginaboateng/pfeiffer_umls_relational_extraction_adapter_clinicalBERT
|
reginaboateng
| 2023-08-08T17:03:45Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-08-08T16:45:32Z |
---
tags:
- adapter-transformers
- adapterhub:umls
- bert
datasets:
- umls
---
# Adapter `reginaboateng/pfeiffer_umls_relational_extraction_adapter_clinicalBERT` for emilyalsentzer/Bio_ClinicalBERT
An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_umls_relational_extraction_adapter_clinicalBERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
KnutJaegersberg/summary-quality-judge-WizardLM-Uncensored-40b-lora
|
KnutJaegersberg
| 2023-08-08T17:00:03Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-08T16:56:53Z |
---
license: mit
---
Prompt format
You are an expert for summarization. Below is an article, followed by a summary. Follow the instruction.
/### Instruction:
You evaluate the summary below on text coherence, text fluency, text informativeness and text relevance. You respond only with good or bad.
Article:
U.S. stocks fell on Tuesday, after a near 300-point rally on the Dow evaporated amid falling commodity prices and worries Germany would throw cold water on the European Central Bank taking additional steps to bolster the region's economy.</p><p>"We've gone from day-to-day volatility to intraday volatility," Mark Luschini, chief market strategist at Janney Montgomery Scott, said.</p><p>"A progression of events caused this, in the context of a market that is scared anyway, with the VIX trading above 20," Peter Boockvar, chief market analyst at the Lindsey Group, said of the market's about face.</p><p>"Copper prices are falling out of bed, down 5 percent, that tells you something about global growth, that something is not right," Boockvar added.</p><p>Reports from overseas that had Germany downplaying the notion of further quantitative easing by the ECB helped push the market lower, Art Hogan, chief market strategist at Wunderlich Securities, said.</p><p>"There are rumors that Germany is botching quantitative easing, and the market is looking for QE to come out on Jan. 22. It's a non-trivial worry, when you're talking about a eurozone that in the aggregate is almost the size of the U.S. economy," Luschini said.</p><p>"And, there's continuation of pressure from crude prices; investors are still trying to ascertain if lower energy prices are good or bad for stocks," Hogan said.</p><p>KB Home led declines among homebuilders after it projected a "significant" drop in gross margins in the current quarter; Apple shares surged after Credit Suisse upgraded the supplier of consumer technology to outperform from neutral.</p><p>With the fourth-quarter earnings season started, investors are on the lookout for the the effect of crude's decline on the S&P 500's collective bottom line, with oil prices on Tuesday falling to near six-year lows as a major OPEC producer stuck to the cartel's decision not to reduce output.</p><p>"Major parts of the global economy are likely to be economic black holes this year, and likely to put downward pressure on optimistic earnings estimates for the first half if not all of 2015," Jim Russell, portfolio manager at Bahl & Gaynor, said.</p><p>"Aluminum production is an energy hog, so the cheaper oil prices definitely helped Alcoa," said Chris Gaffney, senior market strategist at Everbank.</p><p>Still, Alcoa's initial gains evaporated, with the aluminum producer turning lower after reporting better-than-expected results late Monday.</p><p>Read MoreFederated's Orlando: 4Q earnings bears 'smoking dope'
Summary:
u.s. stocks fell on tuesday after a near 300-point rally on the dow evaporated amid falling commodity prices and worries germany would throw cold water on the european central bank taking additional steps to bolster the region 's economy .
/### Response:
good
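A small helper for assembling that prompt (a sketch, not part of the original card). It assumes the leading slash in "/### Instruction:" above is only a markdown escape, and it omits model loading, which would require the WizardLM-Uncensored-40b base plus this LoRA via `peft`:
```python
PROMPT_TEMPLATE = (
    "You are an expert for summarization. Below is an article, followed by a summary. "
    "Follow the instruction.\n\n"
    "### Instruction:\n"
    "You evaluate the summary below on text coherence, text fluency, text informativeness "
    "and text relevance. You respond only with good or bad.\n\n"
    "Article:\n{article}\n\n"
    "Summary:\n{summary}\n\n"
    "### Response:\n"
)

def build_prompt(article: str, summary: str) -> str:
    # Fill in the article and the candidate summary to be judged.
    return PROMPT_TEMPLATE.format(article=article, summary=summary)
```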
|
reginaboateng/compacter_umls_relational_extraction_adapter_SciBERT
|
reginaboateng
| 2023-08-08T16:57:26Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-08-08T16:57:24Z |
---
tags:
- adapterhub:umls
- adapter-transformers
- bert
datasets:
- umls
---
# Adapter `reginaboateng/compacter_umls_relational_extraction_adapter_SciBERT` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/compacter_umls_relational_extraction_adapter_SciBERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Phoenixsymbol/falcon-7b-instruct-ft-adapters-v2
|
Phoenixsymbol
| 2023-08-08T16:49:22Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T16:31:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
reginaboateng/pfeiffer_umls_relational_extraction_adapter_SciBERT
|
reginaboateng
| 2023-08-08T16:46:53Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-08-08T16:46:51Z |
---
tags:
- adapterhub:umls
- bert
- adapter-transformers
datasets:
- umls
---
# Adapter `reginaboateng/pfeiffer_umls_relational_extraction_adapter_SciBERT` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_umls_relational_extraction_adapter_SciBERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
reginaboateng/pfeiffer_umls_relational_extraction_adapter_BioBERT
|
reginaboateng
| 2023-08-08T16:42:57Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-08-08T16:42:56Z |
---
tags:
- adapterhub:umls
- adapter-transformers
- bert
datasets:
- umls
---
# Adapter `reginaboateng/pfeiffer_umls_relational_extraction_adapter_BioBERT` for dmis-lab/biobert-v1.1
An [adapter](https://adapterhub.ml) for the `dmis-lab/biobert-v1.1` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("dmis-lab/biobert-v1.1")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_umls_relational_extraction_adapter_BioBERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged
|
TokenBender
| 2023-08-08T16:42:04Z | 19 | 69 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T21:44:36Z |
---
### Overview:
description:
This is a llama2 7B HF chat model fine-tuned on 122k code instructions. In my early experiments it seems to be doing very well.
additional_info:
It's a bottom of the barrel model 😂 but after quantization it can be
valuable for sure. It definitely proves that a 7B can be useful for boilerplate
code stuff though.
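A minimal inference sketch (not part of the original card): it assumes the merged weights load as a standard Hugging Face causal LM and uses an Alpaca-style prompt, matching the instruction-tuning format mentioned under Plans below.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt (the card notes alpaca-style instruction tuning was used).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, temperature=0.2, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```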
### Plans:
next_steps: "I've a few things in mind and after that this will be more valuable."
tasks:
- name: "I'll quantize these"
timeline: "Possibly tonight or tomorrow in the day"
result: "Then it can be run locally with 4G ram."
- name: "I've used alpaca style instruction tuning"
improvement: |
I'll switch to llama2 style [INST]<<SYS>> style and see if
it improves anything.
- name: "HumanEval report and checking for any training data leaks"
- attempt: "I'll try 8k context via RoPE enhancement"
hypothesis: "Let's see if that degrades performance or not."
commercial_use: |
So far I think this can be used commercially but this is an adapter on Meta's llama2 with
some gating issues so that is there.
contact_info: "If you find any issues or want to just holler at me, you can reach out to me - https://twitter.com/4evaBehindSOTA"
### Library:
name: "peft"
### Training procedure:
quantization_config:
load_in_8bit: False
load_in_4bit: True
llm_int8_threshold: 6.0
llm_int8_skip_modules: None
llm_int8_enable_fp32_cpu_offload: False
llm_int8_has_fp16_weight: False
bnb_4bit_quant_type: "nf4"
bnb_4bit_use_double_quant: False
bnb_4bit_compute_dtype: "float16"
### Framework versions:
PEFT: "0.5.0.dev0"
|
reginaboateng/pfeiffer_umls_relational_extraction_adapter_PubMedBERT
|
reginaboateng
| 2023-08-08T16:41:42Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:umls",
"dataset:umls",
"region:us"
] | null | 2023-08-08T16:41:39Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:umls
datasets:
- umls
---
# Adapter `reginaboateng/pfeiffer_umls_relational_extraction_adapter_PubMedBERT` for microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
An [adapter](https://adapterhub.ml) for the `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_umls_relational_extraction_adapter_PubMedBERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
hoangphu7122002ai/MRC_v1
|
hoangphu7122002ai
| 2023-08-08T16:35:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-06T05:36:51Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: MRC_v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MRC_v1
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
WineDuck/blip2_opt_2_7b_rsvg
|
WineDuck
| 2023-08-08T16:33:57Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T16:33:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0
|
weiren119/traditional_chinese_qlora_llama2_merged
|
weiren119
| 2023-08-08T16:28:30Z | 0 | 9 |
peft
|
[
"peft",
"safetensors",
"llama",
"llama2",
"qLoRa",
"traditional_chinese",
"alpaca",
"text-generation-inference",
"zh",
"license:apache-2.0",
"region:us"
] | null | 2023-08-08T12:06:23Z |
---
library_name: peft
license: apache-2.0
tags:
- llama2
- qLoRa
- traditional_chinese
- alpaca
- text-generation-inference
language:
- zh
---
# Traditional Chinese Llama2
- Github repo: https://github.com/MIBlue119/traditional_chinese_llama2/
- This is a practice project that fine-tunes Llama2 on a Traditional Chinese instruction dataset, starting from the Llama2 chat model.
- It uses QLoRA and the translated Alpaca dataset to fine-tune the llama2-7b model on an RTX 3090 (24GB VRAM) in 9 hours.
Thanks for these references:
- NTU NLP Lab's alpaca dataset: [alpaca-tw_en-align.json](./alpaca-tw-en-align.json): [ntunpllab](https://github.com/ntunlplab/traditional-chinese-alpaca) translated the Stanford Alpaca 52k dataset
- [Chinese Llama 2 7B train.py](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b/blob/main/train.py)
- [Load the pretrained model in 4-bit precision and Set training with LoRA according to hf's trl lib](https://github.com/lvwerra/trl/blob/main/examples/scripts/sft_trainer.py): QLoRA finetuning
## Resources
- traditional chinese qlora finetuned Llama2 merge model: [weiren119/traditional_chinese_qlora_llama2_merged](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2_merged)
- traditional chinese qlora adapter model: [weiren119/traditional_chinese_qlora_llama2](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2)
## Online Demo
- [Run the qlora finetuned model at colab](https://colab.research.google.com/drive/1OYXvhY-8KjEDaGhOLrJe4omjtFgOWjy1?usp=sharing): May need colab pro or colab pro+
## Pretrained model used
- NousResearch: https://huggingface.co/NousResearch/Llama-2-7b-chat-hf
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
## Usage
### Installation dependencies
```
$pip install transformers torch peft
```
#### Run the inference
```
import torch
from transformers import AutoTokenizer, TextStreamer
from peft import AutoPeftModelForCausalLM

# Use the same tokenizer as the merged model
model_id = "weiren119/traditional_chinese_qlora_llama2_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_prompt=True)

# Load the fine-tuned model; you can replace this with your own model
model = AutoPeftModelForCausalLM.from_pretrained(
        model_id,
        load_in_4bit=model_id.endswith("4bit"),
        torch_dtype=torch.float16,
        device_map='auto'
    )
system_prompt = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
def get_prompt(message: str, chat_history: list[tuple[str, str]]) -> str:
texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
for user_input, response in chat_history:
texts.append(f'{user_input.strip()} [/INST] {response.strip()} </s><s> [INST] ')
texts.append(f'{message.strip()} [/INST]')
return ''.join(texts)
print ("="*100)
print ("-"*80)
print ("Have a try!")
s = ''
chat_history = []
while True:
s = input("User: ")
if s != '':
prompt = get_prompt(s, chat_history)
print ('Answer:')
tokens = tokenizer(prompt, return_tensors='pt').input_ids
#generate_ids = model.generate(tokens.cuda(), max_new_tokens=4096, streamer=streamer)
generate_ids = model.generate(input_ids=tokens.cuda(), max_new_tokens=4096, streamer=streamer)
output = tokenizer.decode(generate_ids[0, len(tokens[0]):-1]).strip()
chat_history.append([s, output])
print ('-'*80)
```
|
Phoenixsymbol/falcon-7b-instruct-ft-adapters
|
Phoenixsymbol
| 2023-08-08T16:28:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T21:37:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
dfalvearg/ppo-SnowballTarget
|
dfalvearg
| 2023-08-08T16:26:17Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-08T16:26:10Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dfalvearg/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
reginaboateng/SciBert_adapter_ner_pico_for_classification_task
|
reginaboateng
| 2023-08-08T16:24:31Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pico_ner",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-08-08T16:24:28Z |
---
tags:
- adapter-transformers
- bert
- adapterhub:pico_ner
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/SciBert_adapter_ner_pico_for_classification_task` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/SciBert_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
weav-geng/llama2-qlora-finetuned-resume-v9
|
weav-geng
| 2023-08-08T16:20:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T16:19:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
nokotin/rl_course_vizdoom_health_gathering_supreme
|
nokotin
| 2023-08-08T16:13:39Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T14:30:44Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.48 +/- 2.87
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r nokotin/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
aurioldegbelo/slm-segformer-080823
|
aurioldegbelo
| 2023-08-08T16:02:10Z | 31 | 0 |
transformers
|
[
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-08-08T02:27:32Z |
---
license: mit
base_model: nvidia/mit-b0
tags:
- generated_from_keras_callback
model-index:
- name: slm-segformer-080823
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# slm-segformer-080823
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0357
- Validation Loss: 0.0383
- Validation Mean Iou: 0.8453
- Validation Mean Accuracy: 0.9366
- Validation Overall Accuracy: 0.9869
- Validation Per Category Iou: [0.98646921 0.70414361]
- Validation Per Category Accuracy: [0.99072207 0.88237991]
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 6e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Per Category Iou | Validation Per Category Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------------:|:-----:|
| 0.4798 | 0.1807 | 0.6747 | 0.7770 | 0.9674 | [0.96669254 0.38268484] | [0.98185208 0.57215982] | 0 |
| 0.1552 | 0.1046 | 0.7352 | 0.7991 | 0.9779 | [0.97745298 0.49298956] | [0.99154204 0.60674898] | 1 |
| 0.0981 | 0.1042 | 0.7744 | 0.9090 | 0.9779 | [0.97719564 0.5715319 ] | [0.98310851 0.8349177 ] | 2 |
| 0.0744 | 0.0978 | 0.7876 | 0.9431 | 0.9784 | [0.97773288 0.59755377] | [0.98113179 0.90515736] | 3 |
| 0.0611 | 0.0728 | 0.8224 | 0.9456 | 0.9836 | [0.98310869 0.66170563] | [0.98654807 0.90455283] | 4 |
| 0.0513 | 0.0531 | 0.8330 | 0.9282 | 0.9856 | [0.98518512 0.68084932] | [0.99000668 0.86647783] | 5 |
| 0.0469 | 0.0514 | 0.8326 | 0.9460 | 0.9850 | [0.98451475 0.68075519] | [0.9879771 0.90405278] | 6 |
| 0.0413 | 0.0406 | 0.8452 | 0.9360 | 0.9869 | [0.9864742 0.70392259] | [0.99077125 0.88115845] | 7 |
| 0.0385 | 0.0412 | 0.8495 | 0.9309 | 0.9875 | [0.98715291 0.71182272] | [0.99186047 0.86989475] | 8 |
| 0.0357 | 0.0383 | 0.8453 | 0.9366 | 0.9869 | [0.98646921 0.70414361] | [0.99072207 0.88237991] | 9 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
dgalik/emoBank_test2_epoch20_batch16
|
dgalik
| 2023-08-08T15:56:17Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-08T15:50:29Z |
---
base_model: ''
tags:
- generated_from_trainer
model-index:
- name: emoBank_test2_epoch20_batch16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emoBank_test2_epoch20_batch16
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Mse V: 0.1312
- Mse A: 0.0651
- Mse D: 0.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
TinToTin/taxi-v3-q-table-training
|
TinToTin
| 2023-08-08T15:52:36Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T15:52:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-q-table-training
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Thineshan/taxi-v3-q-table-training", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
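A short evaluation sketch that continues the snippet above (an illustration only; it assumes the pickled dict stores the Q-table under the `"qtable"` key, as in the Deep RL Course notebook, and a gymnasium-style step API):
```python
import numpy as np
import gymnasium as gym

env = gym.make(model["env_id"])
qtable = model["qtable"]  # assumed key name from the course notebook

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode reward:", total_reward)
```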
|
TonyTomyGeorge/my-pet-dog-csd
|
TonyTomyGeorge
| 2023-08-08T15:47:54Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-08T15:43:33Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-csd Dreambooth model trained by TonyTomyGeorge following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET46
Sample pictures of this concept:
.jpg)
|
Meenuantony/my-pet-dog-xzg
|
Meenuantony
| 2023-08-08T15:31:08Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-08T15:27:10Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzg Dreambooth model trained by Meenuantony following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE105
Sample pictures of this concept:

|
ad019el/tamasheq-1
|
ad019el
| 2023-08-08T15:25:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-19T17:13:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tamasheq-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamasheq-1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-arabic](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-arabic) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
SaudxInu/q-Taxi-v3
|
SaudxInu
| 2023-08-08T15:15:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T15:15:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="SaudxInu/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
stabilityai/stablecode-completion-alpha-3b
|
stabilityai
| 2023-08-08T15:11:56Z | 248 | 116 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"code",
"dataset:bigcode/starcoderdata",
"arxiv:2104.09864",
"arxiv:1910.02054",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T15:43:41Z |
---
datasets:
- bigcode/starcoderdata
language:
- code
tags:
- causal-lm
model-index:
- name: stabilityai/stablecode-completion-alpha-3b
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.2018
verified: false
- name: pass@10
type: pass@10
value: 0.3375
verified: false
license: apache-2.0
---
# `StableCode-Completion-Alpha-3B`
## Model Description
`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that were the most-used languages according to the 2023 Stack Overflow developer survey.
## Usage
The model is intended to perform single- and multi-line code completion from a long context window of up to 16k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/stablecode-completion-alpha-3b",
trust_remote_code=True,
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=48,
temperature=0.2,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Model Architecture
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560 | 32 | 32 | 16384 |
* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
## Training
`StableCode-Completion-Alpha-3B` is pre-trained using a multi-stage context length extension schedule following similar work ([Nijkamp et al. 2023](https://blog.salesforceairesearch.com/xgen/)); first pre-training at a context length of 4096 for 300 billion tokens, then fine-tuning at a context length of 16384 for another 200B tokens.
### Training Dataset
The first pre-training stage relies on 300B tokens sourced from the top programming languages occurring in the Stack Overflow developer survey that are present in the `starcoder-data` dataset. We then fine-tune the model on a longer-context augmentation of the `starcoder-data` dataset, which increases the average tokens per sample to 20k.
### Training Procedure
The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the StarCoder tokenizer with a vocabulary size of 49k.
* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)) and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf))
## Use and Limitations
### Intended Use
StableCode-Completion-Alpha-3B independently generates new code completions, but we recommend that you use StableCode-Completion-Alpha-3B together with the tool developed by BigCode and HuggingFace [(huggingface/huggingface-vscode: Code completion VSCode extension for OSS models (github.com))](https://github.com/huggingface/huggingface-vscode), to identify and, if necessary, attribute any outputs that match training code.
### Limitations and bias
This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.
## How to cite
```bibtex
@misc{StableCodeCompleteAlpha,
url={[https://huggingface.co/stabilityai/stablecode-complete-alpha-3b](https://huggingface.co/stabilityai/stablecode-complete-alpha-3b)},
title={Stable Code Complete Alpha},
author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
}
```
|
mrmrob003/rl_course_vizdoom_health_gathering_supreme
|
mrmrob003
| 2023-08-08T15:11:24Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T15:01:29Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.40 +/- 6.46
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r mrmrob003/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
am-infoweb/MRR_QA_15K_UNTIL_2_08_FINRTUNED_ON_21_7_MODEL
|
am-infoweb
| 2023-08-08T15:10:06Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:am-infoweb/MRR-Latest-21-7",
"base_model:finetune:am-infoweb/MRR-Latest-21-7",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-08T13:11:24Z |
---
license: apache-2.0
base_model: am-infoweb/MRR-Latest-21-7
tags:
- generated_from_trainer
model-index:
- name: MRR_QA_15K_UNTIL_2_08_FINRTUNED_ON_21_7_MODEL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MRR_QA_15K_UNTIL_2_08_FINRTUNED_ON_21_7_MODEL
This model is a fine-tuned version of [am-infoweb/MRR-Latest-21-7](https://huggingface.co/am-infoweb/MRR-Latest-21-7) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.03 | 1.0 | 11594 | 1.0591 |
| 0.9616 | 2.0 | 23188 | 0.8061 |
| 0.8357 | 3.0 | 34782 | 0.9515 |
| 0.7217 | 4.0 | 46376 | 0.8091 |
| 0.6558 | 5.0 | 57970 | 0.8454 |
| 0.6175 | 6.0 | 69564 | 0.7826 |
| 0.4479 | 7.0 | 81158 | 0.9225 |
| 0.3561 | 8.0 | 92752 | 0.8987 |
| 0.3635 | 9.0 | 104346 | 0.9856 |
| 0.3647 | 10.0 | 115940 | 1.0308 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
weiren119/traditional_chinese_qlora_llama2
|
weiren119
| 2023-08-08T15:05:42Z | 5 | 2 |
peft
|
[
"peft",
"llama2",
"qLoRa",
"traditional_chinese",
"alpaca",
"adapter",
"license:apache-2.0",
"region:us"
] | null | 2023-08-05T01:42:13Z |
---
library_name: peft
license: apache-2.0
tags:
- llama2
- qLoRa
- traditional_chinese
- alpaca
- adapter
---
# Traditional Chinese Llama2
- github repo: https://github.com/MIBlue119/traditional_chinese_llama2/
- A practice project to fine-tune Llama2 on a Traditional Chinese instruction dataset, starting from the Llama2 chat model.
I use QLoRA and the translated Alpaca dataset to fine-tune the llama2-7b model on an RTX 3090 (24 GB VRAM) in about 9 hours.
Thanks for these references:
- NTU NLP Lab's Alpaca dataset: [alpaca-tw_en-align.json](./alpaca-tw-en-align.json): [ntunpllab](https://github.com/ntunlplab/traditional-chinese-alpaca)'s translation of the Stanford Alpaca 52k dataset
- [Chinese Llama 2 7B train.py](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b/blob/main/train.py)
- [Load the pretrained model in 4-bit precision and Set training with LoRA according to hf's trl lib](https://github.com/lvwerra/trl/blob/main/examples/scripts/sft_trainer.py): QLoRA finetuning
## Resources
- traditional chinese qlora finetuned Llama2 merge model: [weiren119/traditional_chinese_qlora_llama2_merged](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2_merged)
- traditional chinese qlora adapter model: [weiren119/traditional_chinese_qlora_llama2](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2)
## Online Demo
- [Run the qlora finetuned model at colab](https://colab.research.google.com/drive/1OYXvhY-8KjEDaGhOLrJe4omjtFgOWjy1?usp=sharing): May need colab pro or colab pro+
## Notice
This repo contains the model adapter only.
If you want to use the merged checkpoint (adapter + original model), use this repo: https://huggingface.co/weiren119/traditional_chinese_qlora_llama2_merged
## Pretrained base model
- NousResearch: https://huggingface.co/NousResearch/Llama-2-7b-chat-hf
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
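The settings listed above map directly onto a `BitsAndBytesConfig`. Below is a minimal sketch of reloading the base model with the same quantization options; the config values mirror the list, while the reload itself is illustrative rather than part of this repo.
```python
# Sketch only: reconstructs the quantization config above for loading the base model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base_model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",      # base model named earlier in this card
    quantization_config=bnb_config,
    device_map="auto",
)
```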
### Framework versions
- PEFT 0.4.0
## Usage
### Installation dependencies
```
$pip install transformers torch peft
```
#### Run the inference
```
import transformers
import torch
from transformers import AutoTokenizer, TextStreamer
from peft import AutoPeftModelForCausalLM
# Use the same tokenizer from the source model
original_model_path="NousResearch/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(original_model_path, use_fast=False)
# Load qlora fine-tuned model, you can replace this with your own model
qlora_model_path = "weiren119/traditional_chinese_qlora_llama2"
model = AutoPeftModelForCausalLM.from_pretrained(
qlora_model_path,
load_in_4bit=qlora_model_path.endswith("4bit"),
torch_dtype=torch.float16,
device_map='auto'
)
system_prompt = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
def get_prompt(message: str, chat_history: list[tuple[str, str]]) -> str:
    texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
    for user_input, response in chat_history:
        texts.append(f'{user_input.strip()} [/INST] {response.strip()} </s><s> [INST] ')
    texts.append(f'{message.strip()} [/INST]')
    return ''.join(texts)

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True)

print ("="*100)
print ("-"*80)
print ("Have a try!")
s = ''
chat_history = []
while True:
    s = input("User: ")
    if s != '':
        prompt = get_prompt(s, chat_history)
        print ('Answer:')
        tokens = tokenizer(prompt, return_tensors='pt').input_ids
        #generate_ids = model.generate(tokens.cuda(), max_new_tokens=4096, streamer=streamer)
        generate_ids = model.generate(input_ids=tokens.cuda(), max_new_tokens=4096, streamer=streamer)
        output = tokenizer.decode(generate_ids[0, len(tokens[0]):-1]).strip()
        chat_history.append([s, output])
    print ('-'*80)
```
|
TheLastBen/William_Eggleston_Style_SDXL
|
TheLastBen
| 2023-08-08T15:02:40Z | 1,949 | 22 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-30T19:13:11Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: william eggleston
widget:
- text: by william eggleston
---
### William Eggleston Photography Style
#### SDXL LoRA by TheLastBen
#### Prompts to start with :
a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful
closeup portrait of a woman in a kitchen by william eggleston, beautiful, sunrays, sunlight
a beautiful view through a kitchen window, car, by william eggleston, sunlight
---
Trained using https://github.com/TheLastBen/fast-stable-diffusion SDXL trainer.
ComfyUI seems to give better results than A1111, but that's just me.
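For diffusers users, a minimal sketch is shown below; the step count and prompt are assumptions rather than recommended settings.
```python
# Sketch only: load SDXL base, apply this LoRA, and render one of the example prompts.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL")

image = pipe(
    "a house by william eggleston, sunrays, beautiful, sunlight",
    num_inference_steps=30,
).images[0]
image.save("eggleston_house.png")
```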
#### Sample pictures:
|
KallistiTMR/llama-2-7b-chat-wiz-k16-14
|
KallistiTMR
| 2023-08-08T14:57:04Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T04:17:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
ototadana/occlusion-aware-face-segmentation
|
ototadana
| 2023-08-08T14:55:43Z | 0 | 2 | null |
[
"mmsegmentation",
"face",
"occlusion",
"image-segmentation",
"license:cc0-1.0",
"region:us"
] |
image-segmentation
| 2023-08-08T13:42:08Z |
---
license: cc0-1.0
pipeline_tag: image-segmentation
tags:
- mmsegmentation
- face
- occlusion
---
# Occlusion-aware face segmentation
A model for occlusion-aware face segmentation.
This model was created following the procedures in [mmsegmentation](https://mmsegmentation.readthedocs.io/en/latest/)'s PR [[Feature] Support Delving into High-Quality Synthetic Face Occlusion Segmentation Datasets #2194](https://github.com/open-mmlab/mmsegmentation/pull/2194).
For more information, see:
- https://github.com/open-mmlab/mmsegmentation/pull/2194/files
- https://github.com/kennyvoo/face-occlusion-generation
### How to use
Use with [mmsegmentation](https://mmsegmentation.readthedocs.io/en/latest/get_started.html).
Example:
```python
from mmseg.apis import inference_model, init_model, show_result_pyplot
import mmcv
config_file = 'deeplabv3plus_r101_512x512_face-occlusion.py'
checkpoint_file = 'deeplabv3plus_r101_512x512_face-occlusion-93ec6695.pth'
model = init_model(config_file, checkpoint_file, device='cuda:0')
img = 'face-image.png'
result = inference_model(model, img)
show_result_pyplot(model, img, result, show=True, out_file='result.jpg', opacity=0.5)
```
|
IIIT-L/muril-base-cased-finetuned-code-mixed-DS
|
IIIT-L
| 2023-08-08T14:47:32Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-28T15:10:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: muril-base-cased-finetuned-code-mixed-DS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# muril-base-cased-finetuned-code-mixed-DS
This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9319
- Accuracy: 0.6982
- Precision: 0.6327
- Recall: 0.6314
- F1: 0.6320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0542 | 1.98 | 248 | 0.9786 | 0.5976 | 0.3936 | 0.5454 | 0.4330 |
| 0.9307 | 3.97 | 496 | 0.8836 | 0.5996 | 0.4072 | 0.5604 | 0.4399 |
| 0.8323 | 5.95 | 744 | 0.8266 | 0.5996 | 0.5508 | 0.5720 | 0.4527 |
| 0.7554 | 7.94 | 992 | 0.8006 | 0.6318 | 0.5601 | 0.5838 | 0.5232 |
| 0.6821 | 9.92 | 1240 | 0.8777 | 0.6740 | 0.5929 | 0.5875 | 0.5836 |
| 0.6173 | 11.9 | 1488 | 0.8389 | 0.6640 | 0.5918 | 0.6031 | 0.5881 |
| 0.5552 | 13.89 | 1736 | 0.9003 | 0.6962 | 0.6240 | 0.6160 | 0.6191 |
| 0.4932 | 15.87 | 1984 | 0.8979 | 0.6982 | 0.6266 | 0.6231 | 0.6245 |
| 0.4446 | 17.86 | 2232 | 0.9104 | 0.7002 | 0.6310 | 0.6290 | 0.6298 |
| 0.4084 | 19.84 | 2480 | 0.9284 | 0.7002 | 0.6278 | 0.6255 | 0.6264 |
| 0.3763 | 21.82 | 2728 | 0.9228 | 0.7082 | 0.6436 | 0.6380 | 0.6398 |
| 0.3575 | 23.81 | 2976 | 0.9319 | 0.6982 | 0.6327 | 0.6314 | 0.6320 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
IIIT-L/xlm-roberta-base-finetuned-code-mixed-DS
|
IIIT-L
| 2023-08-08T14:46:30Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-07T21:54:54Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlm-roberta-base-finetuned-code-mixed-DS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-code-mixed-DS
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8266
- Accuracy: 0.6318
- Precision: 0.5781
- Recall: 0.5978
- F1: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.932923543227153e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0602 | 1.0 | 248 | 1.0280 | 0.5211 | 0.4095 | 0.4557 | 0.3912 |
| 0.9741 | 1.99 | 496 | 0.9318 | 0.5533 | 0.4758 | 0.5002 | 0.4415 |
| 0.8585 | 2.99 | 744 | 0.8585 | 0.6076 | 0.5539 | 0.5731 | 0.5353 |
| 0.7293 | 3.98 | 992 | 0.8266 | 0.6318 | 0.5781 | 0.5978 | 0.5677 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
vimal52/t5_base_finetune_QLoRa_v3.0
|
vimal52
| 2023-08-08T14:37:37Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-08-08T11:43:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
jwb220/Taxi-v3
|
jwb220
| 2023-08-08T14:34:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T14:34:44Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jwb220/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
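`load_from_hub` is a helper defined in the Deep RL course notebook rather than a library function; a minimal sketch of such a helper is below, assuming the checkpoint is a pickled dict as saved by the course code (you will also need `gym`/`gymnasium` imported for `gym.make`).
```python
# Sketch only: a load_from_hub helper, assuming the file is a pickled dict (Q-table + metadata).
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled model file from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```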
|
Muhammadreza/mann-e-artistic-4
|
Muhammadreza
| 2023-08-08T14:17:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-08T14:14:01Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mann-e_artistic-4 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
khsuniv201/q_Taxi-v3
|
khsuniv201
| 2023-08-08T14:16:49Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T14:15:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="khsuniv201/q_Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mrmrob003/ppo-LunarLander-v2-from-scratch
|
mrmrob003
| 2023-08-08T14:11:55Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T13:03:37Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 119.81 +/- 25.67
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
RIOLITE/products_matching_aumet_fine_tune_2023-08-08
|
RIOLITE
| 2023-08-08T14:03:39Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-08T07:03:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
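Putting the pieces above together, a sketch of the training call is shown below; the `InputExample` pair is a placeholder, and the starting checkpoint is not stated in this card, so the repo id is used purely as a stand-in.
```python
# Sketch only: reconstructs the fit() call from the parameters listed above.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("RIOLITE/products_matching_aumet_fine_tune_2023-08-08")  # stand-in checkpoint
train_examples = [InputExample(texts=["product name a", "matching product name a"])]  # placeholder data
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```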
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
cuixing/textual_inversion_cat-toytest08082136
|
cuixing
| 2023-08-08T13:58:27Z | 14 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-08T13:36:52Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - cuixing/textual_inversion_cat-toytest08082136
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
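A minimal diffusers sketch for loading the learned embedding is shown below; the placeholder token `<cat-toy>` and the prompt are assumptions, since the card does not state the token name.
```python
# Sketch only: load the base model and attach the textual inversion embedding from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("cuixing/textual_inversion_cat-toytest08082136")

# "<cat-toy>" is a hypothetical placeholder token; replace it with the token learned in this run.
image = pipe("a photo of a <cat-toy> on a wooden table", num_inference_steps=30).images[0]
image.save("textual_inversion_sample.png")
```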
|
reginaboateng/umls_relational_extraction_adapter_SciBERT
|
reginaboateng
| 2023-08-08T13:57:57Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:umls",
"dataset:umls",
"region:us"
] | null | 2023-08-08T13:57:53Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:umls
datasets:
- umls
---
# Adapter `reginaboateng/umls_relational_extraction_adapter_SciBERT` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/umls_relational_extraction_adapter_SciBERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
player1537/Bloom-560m-LoRA-trained-on-Dolphin
|
player1537
| 2023-08-08T13:55:48Z | 13 | 0 |
peft
|
[
"peft",
"tensorboard",
"en",
"dataset:player1537/Bloom-560m-trained-on-Dolphin",
"dataset:ehartford/dolphin",
"license:wtfpl",
"region:us"
] | null | 2023-07-30T20:25:57Z |
---
library_name: peft
license: wtfpl
datasets:
- player1537/Bloom-560m-trained-on-Dolphin
- ehartford/dolphin
language:
- en
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
AljoSt/ppo-LunarLander-v2
|
AljoSt
| 2023-08-08T13:54:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T13:53:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.29 +/- 13.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
reginaboateng/umls_relational_extraction_adapter_PubMedBERT
|
reginaboateng
| 2023-08-08T13:53:13Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:umls",
"dataset:umls",
"region:us"
] | null | 2023-08-08T13:53:08Z |
---
tags:
- bert
- adapterhub:umls
- adapter-transformers
datasets:
- umls
---
# Adapter `reginaboateng/umls_relational_extraction_adapter_PubMedBERT` for microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
An [adapter](https://adapterhub.ml) for the `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")
adapter_name = model.load_adapter("reginaboateng/umls_relational_extraction_adapter_PubMedBERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
YassineKader/faster-whisper-small-haitian
|
YassineKader
| 2023-08-08T13:52:40Z | 6 | 1 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"ht",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T21:02:17Z |
---
language:
- ht
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper small model for CTranslate2
This repository contains the conversion of [YassineKader/whisper-small-haitian](https://huggingface.co/YassineKader/whisper-small-haitian) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```bash
#clone the repo
git clone https://huggingface.co/YassineKader/faster-whisper-small-haitian
```
```python
import ctranslate2
import librosa
import transformers
from datetime import datetime
# Load and resample the audio file.
audio, _ = librosa.load("audio1.wav", sr=16000, mono=True)
# Compute the features of the first 30 seconds of audio.
processor = transformers.WhisperProcessor.from_pretrained("YassineKader/whisper-small-haitian")
inputs = processor(audio, return_tensors="np", sampling_rate=16000)
features = ctranslate2.StorageView.from_array(inputs.input_features)
# Load the model on CPU.
model = ctranslate2.models.Whisper("faster-whisper-small-haitian")
# Detect the language.
results = model.detect_language(features)
language, probability = results[0][0]
print("Detected language %s with probability %f" % (language, probability))
print(datetime.now())
# Describe the task in the prompt.
# See the prompt format in https://github.com/openai/whisper.
prompt = processor.tokenizer.convert_tokens_to_ids(
[
"<|startoftranscript|>",
language,
"<|transcribe|>",
"<|notimestamps|>", # Remove this token to generate timestamps.
]
)
# Run generation for the 30-second window.
results = model.generate(features, [prompt])
transcription = processor.decode(results[0].sequences_ids[0])
print(datetime.now())
print(transcription)
```
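Since this is a CTranslate2 Whisper export, it can also be used from faster-whisper; a minimal sketch follows, assuming the repository has been cloned locally as shown above (the CPU/float32 settings are assumptions).
```python
# Sketch only: transcribe with faster-whisper using the locally cloned model directory.
from faster_whisper import WhisperModel

model = WhisperModel("faster-whisper-small-haitian", device="cpu", compute_type="float32")

segments, info = model.transcribe("audio1.wav", language="ht")
print("Detected language:", info.language, "probability:", info.language_probability)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```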
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model YassineKader/whisper-small-haitian --output_dir faster-whisper-small-ht --copy_files tokenizer.json --quantization float32
```
Note that the model weights are saved in FP32 (per the `--quantization float32` option above). This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-small).**
|
peterandrew987/modified
|
peterandrew987
| 2023-08-08T13:45:02Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"base_model:indobenchmark/indobart-v2",
"base_model:finetune:indobenchmark/indobart-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-08T13:33:16Z |
---
license: mit
base_model: indobenchmark/indobart-v2
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model-index:
- name: modified
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
config: plain_text
split: train[:1000]
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 15.4275
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modified
This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6035
- Rouge1: 15.4275
- Rouge2: 14.2367
- Rougel: 15.4625
- Rougelsum: 15.4954
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4719 | 1.0 | 200 | 1.6035 | 15.4275 | 14.2367 | 15.4625 | 15.4954 | 20.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
MahyAss/MarineLePen_RVC_model
|
MahyAss
| 2023-08-08T13:43:27Z | 0 | 0 | null |
[
"rvc",
"model",
"french",
"politician",
"marine le pen",
"audio-to-audio",
"fr",
"region:us"
] |
audio-to-audio
| 2023-08-08T13:34:33Z |
---
language:
- fr
tags:
- rvc
- model
- french
- politician
- marine le pen
pipeline_tag: audio-to-audio
---
|
llmcode/bloom-3b
|
llmcode
| 2023-08-08T13:33:25Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T13:33:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Araaa/llmmedical
|
Araaa
| 2023-08-08T13:32:41Z | 0 | 0 | null |
[
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-08-03T10:12:14Z |
---
language:
- en
pipeline_tag: text-generation
---
|
kashif/stack-llama-2
|
kashif
| 2023-08-08T13:25:57Z | 1,514 | 15 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"trl",
"rlhf",
"en",
"dataset:lvwerra/stack-exchange-paired",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-04T14:43:35Z |
---
license: bigscience-openrail-m
datasets:
- lvwerra/stack-exchange-paired
language:
- en
tags:
- trl
- transformers
- rlhf
---
# Stack-Llama-2
[DPO](https://github.com/eric-mitchell/direct-preference-optimization) fine-tuned [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b). The model is designed to generate human-like responses to questions in Stack Exchange domains of programming, mathematics, physics, and more. For more info check out the [blog post](https://huggingface.co/blog/dpo-trl) and github [example](https://github.com/lvwerra/trl/tree/main/examples/research_projects/stack_llama_2/scripts).
## Uses
### Direct Use
- Long-form question-answering on topics of programming, mathematics, and physics
- Demonstrating a Large Language Model's ability to follow target behavior of generating answers to a question that would be highly rated on [Stack Exchange](https://stackexchange.com).
### Out of Scope Use
- Replacing human expertise
## Bias, Risks, and Limitations
- Inherits bias, risks, and limitations from the LLaMA model, as described in the [LLaMA Model Card Bias Evaluation](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#quantitative-analysis) and [Ethical Considerations](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#ethical-considerations).
- Retains biases present in the Stack Exchange dataset. Per the [latest developer survey for Stack Overflow](https://survey.stackoverflow.co/2022/),
which constitutes a significant part of the StackExchange data,
most users who answered the survey identified themselves as [White or European, men, between 25 and 34 years old, and based in the US (with a significant part of responders from India).](https://survey.stackoverflow.co/2022/#developer-profile-demographics)
- May generate answers that are incorrect or misleading.
- May copy answers from the training data verbatim.
- May generate language that is hateful or promotes discrimination ([example](https://huggingface.co/trl-lib/llama-7b-se-rl-peft/discussions/7#64376083369f6f907f5bfe4c)).
- May generate language that is offensive to direct or indirect users or to people or groups mentioned.
### Recommendations
- Answers should be validated through the use of external sources.
- Disparities between the data contributors and the direct and indirect users of the technology should inform developers in assessing what constitutes an appropriate use case.
- Further research is needed to attribute model generations to sources in the training data, especially in cases where the model copies answers from the training data.
## Training Details
### Training Data
Original datasets are described in [the LLaMA Model Card](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#training-dataset).
Fine-tuning datasets for this model are based on [Stack Exchange Paired](https://huggingface.co/datasets/lvwerra/stack-exchange-paired), which consists of questions and answers from various domains in Stack Exchange, such as programming, mathematics, physics, and more. Specifically:
**Traditional Fine-tuning:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune)
**DPO Training:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl)
### Training Procedure
The model was first fine-tuned on the Stack Exchange question and answer pairs and then fine-tuned via the DPO training procedure using the SFT model as the reference model. It is trained to respond to prompts with the following prompt template:
```
Question: <Query>
Answer: <Response>
```
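As an illustration, a minimal sketch of querying the model with this template via transformers is shown below; the question and the generation settings are assumptions, not recommendations from this card.
```python
# Sketch only: format a question with the template above and generate an answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kashif/stack-llama-2")
model = AutoModelForCausalLM.from_pretrained(
    "kashif/stack-llama-2", torch_dtype=torch.float16, device_map="auto"
)

prompt = "Question: How do I reverse a list in Python?\n\nAnswer: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```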
|
harshil10/dolly-v2-3b
|
harshil10
| 2023-08-08T13:24:05Z | 4 | 0 |
transformers
|
[
"transformers",
"gpt_neox",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-04T16:51:22Z |
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---
# dolly-v2-3b Model Card
## Summary
Databricks' `dolly-v2-3b`, an instruction-following large language model trained on the Databricks machine learning platform
that is licensed for commercial use. Based on `pythia-2.8b`, Dolly is trained on ~15k instruction/response fine tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-3b` is not a state-of-the-art model, but does exhibit surprisingly
high quality instruction following behavior not characteristic of the foundation model on which it is based.
Dolly v2 is also available in these larger models sizes:
* [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter based on `pythia-12b`
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter based on `pythia-6.9b`
Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.
**Owner**: Databricks, Inc.
## Model Overview
`dolly-v2-3b` is a 2.8 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:
```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### LangChain Usage
To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to only return the new text.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16,
trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
# template for an instruction with no input
prompt = PromptTemplate(
input_variables=["instruction"],
template="{instruction}")
# template for an instruction with input
prompt_with_context = PromptTemplate(
input_variables=["instruction", "context"],
template="{instruction}\n\nInput:\n{context}")
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)
llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
Example predicting using a simple instruction:
```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```
Example predicting using an instruction with context:
```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```
## Known Limitations
### Performance Limitations
**`dolly-v2-3b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpuses.
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-3b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-3b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
### Dataset Limitations
Like all language models, `dolly-v2-3b` reflects the content and limitations of its training corpuses.
- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-3b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.
Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.
### Benchmark Metrics
Below you'll find various models benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-3b` is not state of the art.
It underperforms `dolly-v1-6b` in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# Happy Hacking!
|
neolord/distilbert-base-uncased-finetuned-emotion
|
neolord
| 2023-08-08T13:13:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T11:56:02Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9250709778732631
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.9255
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8077 | 1.0 | 250 | 0.3117 | 0.91 | 0.9082 |
| 0.2515 | 2.0 | 500 | 0.2169 | 0.9255 | 0.9251 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
jayavibhav/bert-classification-1500samples
|
jayavibhav
| 2023-08-08T13:12:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T13:07:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-classification-1500samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-classification-1500samples
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4366
- Accuracy: 0.882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
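These settings map roughly onto the following `TrainingArguments` (a sketch only; `output_dir` is a placeholder and the actual training script is not part of this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-classification-1500samples",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # The Adam betas/epsilon listed above are the transformers defaults
    # (adam_beta1 / adam_beta2 / adam_epsilon).
)
```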
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 47 | 0.2065 | 0.932 |
| No log | 2.0 | 94 | 0.4366 | 0.882 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
cuixing/textual_inversion_cat-toytest08082049
|
cuixing
| 2023-08-08T13:10:03Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-08T12:49:44Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - cuixing/textual_inversion_cat-toytest08082049
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
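A rough usage sketch with `diffusers`. The placeholder token under which the embedding was saved is not stated in this card; `<cat-toy>` below is an assumption — check the repository files for the exact token:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repository
pipe.load_textual_inversion("cuixing/textual_inversion_cat-toytest08082049")

# "<cat-toy>" is an assumed placeholder token -- replace it with the token used during training
image = pipe("a photo of a <cat-toy> on a wooden table").images[0]
image.save("cat_toy.png")
```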
|
himanshusrivastava/finetuned-indian-food-images
|
himanshusrivastava
| 2023-08-08T13:02:08Z | 257 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-08T13:00:17Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
model-index:
- name: finetuned-indian-food-images
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food-images
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
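A minimal inference sketch (assuming the checkpoint is public; the image path is a placeholder for any food photo):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="himanshusrivastava/finetuned-indian-food-images",
)
predictions = classifier("dish.jpg")  # placeholder path to a local image
print(predictions[:3])  # top predicted dish labels with scores
```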
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
vishyrjun/med_qa
|
vishyrjun
| 2023-08-08T12:59:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-02T20:43:48Z |
# Med Text
## Dataset converted to Alpaca format
## Features
- This is a collection of already available datasets converted to the Alpaca format
- It can be used directly to train LLMs
- Below is the list of data sources from which the dataset is prepared
[BI55/MedText](https://huggingface.co/datasets/BI55/MedText)
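For reference, a single Alpaca-format record has the following shape (the field values below are placeholders, not rows from the dataset):
```python
record = {
    "instruction": "<the question or task prompt>",
    "input": "<optional extra context; often an empty string>",
    "output": "<the expected answer the model should learn to produce>",
}
```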
|
elusive1337/KiXSTAr-RiP
|
elusive1337
| 2023-08-08T12:55:41Z | 0 | 0 | null |
[
"gaming",
"siege",
"twitch streamer",
"youtuber",
"kixstar",
"michael stockley",
"en",
"license:cc-by-4.0",
"region:us"
] | null | 2023-08-08T12:51:00Z |
---
license: cc-by-4.0
language:
- en
tags:
- gaming
- siege
- twitch streamer
- youtuber
- kixstar
- michael stockley
---
|
yogjoshi14/q-FrozenLake-v1-4x4-noSlippery
|
yogjoshi14
| 2023-08-08T12:48:05Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T12:48:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the helper defined in the Deep RL course notebook (downloads and unpickles the model dict)
model = load_from_hub(repo_id="yogjoshi14/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
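A minimal greedy rollout, assuming the saved dictionary exposes the Q-table under the `qtable` key as in the Deep RL course notebooks:
```python
import numpy as np

state, info = env.reset()  # gymnasium-style reset; older gym versions return only the state
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-values
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```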
|
Camille02/t5-small-finetuned-wikisql-sql-nl-nl-sql
|
Camille02
| 2023-08-08T12:47:12Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-20T09:18:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-wikisql-sql-nl-nl-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql-sql-nl-nl-sql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the Wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
- Bleu: 41.883
- Gen Len: 16.6165
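A minimal generation sketch. The task prefix below is an assumption — T5 WikiSQL fine-tunes commonly use `"translate English to SQL: "`, but check the training setup; the question itself is just an illustration:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Camille02/t5-small-finetuned-wikisql-sql-nl-nl-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "translate English to SQL: How many heads of the departments are older than 56?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```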
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.2644 | 1.0 | 8097 | 0.2248 | 39.6535 | 16.6696 |
| 0.2386 | 2.0 | 16194 | 0.2063 | 40.9022 | 16.6533 |
| 0.2218 | 3.0 | 24291 | 0.1981 | 41.5751 | 16.6832 |
| 0.2212 | 4.0 | 32388 | 0.1940 | 41.7557 | 16.6145 |
| 0.2111 | 5.0 | 40485 | 0.1930 | 41.883 | 16.6165 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NiiCole/vivit-b-16x2-kinetics400-finetuned-ucf101-subset
|
NiiCole
| 2023-08-08T12:42:23Z | 66 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"base_model:finetune:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-08-03T15:01:37Z |
---
license: mit
base_model: google/vivit-b-16x2-kinetics400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vivit-b-16x2-kinetics400-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivit-b-16x2-kinetics400-finetuned-ucf101-subset
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0546
- Accuracy: 0.9730
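A rough inference sketch (assuming the ViViT classes available in recent `transformers` releases; the random clip below stands in for 32 real video frames):
```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

model_id = "NiiCole/vivit-b-16x2-kinetics400-finetuned-ucf101-subset"
processor = VivitImageProcessor.from_pretrained(model_id)
model = VivitForVideoClassification.from_pretrained(model_id)

# ViViT-B/16x2 samples 32 frames per clip; replace this dummy clip with real frames
video = list(np.random.randint(0, 255, (32, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```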
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0224 | 0.25 | 300 | 0.0600 | 0.9730 |
| 0.0011 | 1.25 | 600 | 0.2143 | 0.9730 |
| 0.0004 | 2.25 | 900 | 0.0444 | 0.9730 |
| 0.0005 | 3.25 | 1200 | 0.0546 | 0.9730 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|