modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-03 00:36:49) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 535 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-03 00:36:49) | card (string, 11-1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
gokuls/hBERTv1_no_pretrain_rte
|
gokuls
| 2023-06-15T09:38:15Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T10:25:42Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_no_pretrain_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_rte
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
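For readers who want to reproduce this setup with the 🤗 Trainer, here is a rough sketch of how the values above map onto `TrainingArguments`; the `output_dir` is an assumption and the multi-GPU launch (e.g. via `torchrun`) is left to the launcher, since the original training script is not included in the card.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; multi-GPU comes from the launcher, not these args.
training_args = TrainingArguments(
    output_dir="hBERTv1_no_pretrain_rte",  # assumption: not stated in the card
    learning_rate=4e-5,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```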
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7561 | 1.0 | 26 | 0.6977 | 0.4729 |
| 0.7108 | 2.0 | 52 | 0.7333 | 0.4729 |
| 0.7378 | 3.0 | 78 | 0.6919 | 0.5271 |
| 0.7045 | 4.0 | 104 | 0.7052 | 0.5271 |
| 0.7077 | 5.0 | 130 | 0.7034 | 0.5271 |
| 0.6816 | 6.0 | 156 | 0.7515 | 0.5343 |
| 0.6692 | 7.0 | 182 | 0.7616 | 0.5235 |
| 0.5846 | 8.0 | 208 | 0.9617 | 0.4838 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Outimus/ti
|
Outimus
| 2023-06-15T09:35:43Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-23T08:27:26Z |
---
license: creativeml-openrail-m
---
|
tux/q-Taxi-v3-2
|
tux
| 2023-06-15T09:21:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:17:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the snippet relies on a Gym/Gymnasium `gym` alias

# `load_from_hub` is the helper defined alongside the training notebook (not a library import).
model = load_from_hub(repo_id="tux/q-Taxi-v3-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
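Once the environment is created, a greedy rollout with the downloaded Q-table might look like the sketch below; the `"qtable"` key is an assumption about how the checkpoint dictionary stores the agent, so check the keys of your own pickle.
```python
import numpy as np

# Greedy evaluation episode using the loaded Q-table (key name assumed).
qtable = model["qtable"]
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # pick the greedy action for this state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```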
|
shouqiangli/test001
|
shouqiangli
| 2023-06-15T09:19:30Z | 0 | 0 |
allennlp
|
[
"allennlp",
"finance",
"text-classification",
"aa",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] |
text-classification
| 2023-06-15T08:50:33Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: allennlp
pipeline_tag: text-classification
tags:
- finance
---
|
anilsekharc/ppo-Huggy
|
anilsekharc
| 2023-06-15T09:19:21Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:19:17Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: anilsekharc/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fx1H/q-Taxi-v3-v2
|
fx1H
| 2023-06-15T09:17:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:17:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the snippet relies on a Gym/Gymnasium `gym` alias

# `load_from_hub` is the helper defined alongside the training notebook (not a library import).
model = load_from_hub(repo_id="fx1H/q-Taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tux/q-Taxi-v3-1
|
tux
| 2023-06-15T09:16:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:11:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the snippet relies on a Gym/Gymnasium `gym` alias

# `load_from_hub` is the helper defined alongside the training notebook (not a library import).
model = load_from_hub(repo_id="tux/q-Taxi-v3-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gokuls/hBERTv2_new_pretrain_48_KD_mnli
|
gokuls
| 2023-06-15T09:16:16Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T04:12:58Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_KD_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3522172497965826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_mnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0986
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1024 | 1.0 | 3068 | 1.0991 | 0.3182 |
| 1.0989 | 2.0 | 6136 | 1.0986 | 0.3545 |
| 1.0987 | 3.0 | 9204 | 1.0986 | 0.3274 |
| 1.1005 | 4.0 | 12272 | 1.0986 | 0.3545 |
| 1.0986 | 5.0 | 15340 | 1.0986 | 0.3545 |
| 1.0987 | 6.0 | 18408 | 1.1010 | 0.3274 |
| 1.0986 | 7.0 | 21476 | 1.0987 | 0.3545 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AustinCarthy/MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-15T09:13:10Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T05:45:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test benign: Fall, Train phish: Fall, Test phish: Fall, generated URL dataset: generated_phish_MixGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0167
- Accuracy: 0.9982
- F1: 0.9807
- Precision: 0.9994
- Recall: 0.9628
- Roc Auc Score: 0.9814
- Tpr At Fpr 0.01: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0042 | 1.0 | 35625 | 0.0207 | 0.9962 | 0.9583 | 0.9987 | 0.921 | 0.9605 | 0.9318 |
| 0.002 | 2.0 | 71250 | 0.0188 | 0.9976 | 0.9737 | 0.9987 | 0.9498 | 0.9749 | 0.9544 |
| 0.0027 | 3.0 | 106875 | 0.0232 | 0.9974 | 0.9719 | 0.9979 | 0.9472 | 0.9736 | 0.9464 |
| 0.0007 | 4.0 | 142500 | 0.0155 | 0.9982 | 0.9805 | 0.9983 | 0.9632 | 0.9816 | 0.965 |
| 0.0007 | 5.0 | 178125 | 0.0167 | 0.9982 | 0.9807 | 0.9994 | 0.9628 | 0.9814 | 0.9688 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tux/q-Taxi-v3
|
tux
| 2023-06-15T09:11:30Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:07:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the snippet relies on a Gym/Gymnasium `gym` alias

# `load_from_hub` is the helper defined alongside the training notebook (not a library import).
model = load_from_hub(repo_id="tux/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pushkin05/a2c-PandaReachDense-v2
|
pushkin05
| 2023-06-15T09:07:08Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T08:15:47Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.47 +/- 0.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
# Minimal loading sketch; the checkpoint filename is an assumption (check the repo's files).
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("pushkin05/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
tux/default_taxi
|
tux
| 2023-06-15T09:04:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:04:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: default_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the snippet relies on a Gym/Gymnasium `gym` alias

# `load_from_hub` is the helper defined alongside the training notebook (not a library import).
model = load_from_hub(repo_id="tux/default_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Chaitanya14/flan-t5-large-finetuned-xsum
|
Chaitanya14
| 2023-06-15T08:54:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T08:49:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-large-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-finetuned-xsum
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 14.0152
- Rouge2: 6.2802
- Rougel: 13.7963
- Rougelsum: 13.9306
- Gen Len: 6.8696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 13 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 2.0 | 26 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 3.0 | 39 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 4.0 | 52 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 5.0 | 65 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 6.0 | 78 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 7.0 | 91 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 8.0 | 104 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 9.0 | 117 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
| No log | 10.0 | 130 | nan | 14.0152 | 6.2802 | 13.7963 | 13.9306 | 6.8696 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_KD_mnli
|
gokuls
| 2023-06-15T08:43:08Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T01:43:06Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_KD_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3267900732302685
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_mnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0977
- Accuracy: 0.3268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1011 | 1.0 | 3068 | 1.1001 | 0.3176 |
| 1.0991 | 2.0 | 6136 | 1.0978 | 0.3295 |
| 1.0983 | 3.0 | 9204 | 1.0985 | 0.3351 |
| 1.0984 | 4.0 | 12272 | 1.0983 | 0.3294 |
| 1.098 | 5.0 | 15340 | 1.0978 | 0.3264 |
| 1.098 | 6.0 | 18408 | 1.0979 | 0.3285 |
| 1.098 | 7.0 | 21476 | 1.0980 | 0.3272 |
| 1.098 | 8.0 | 24544 | 1.0981 | 0.3266 |
| 1.098 | 9.0 | 27612 | 1.0980 | 0.3256 |
| 1.098 | 10.0 | 30680 | 1.0985 | 0.3320 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
XiaHan19/shami-token
|
XiaHan19
| 2023-06-15T08:42:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-07T14:12:26Z |
---
license: apache-2.0
---
Shami Tokenizer
===
This is the tokenizer of the Shami model.
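A minimal loading sketch, assuming the repo ships the tokenizer files in the standard 🤗 Transformers layout:
```python
from transformers import AutoTokenizer

# Assumption: tokenizer.json / tokenizer_config.json are present in the repo.
tokenizer = AutoTokenizer.from_pretrained("XiaHan19/shami-token")
print(tokenizer("Hello, Shami!"))
```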
|
Manaro/Reinforce-pixelcopter
|
Manaro
| 2023-06-15T08:15:06Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-25T18:44:14Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 4.60 +/- 7.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
pushkin05/Pyramids
|
pushkin05
| 2023-06-15T08:09:16Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-15T07:57:49Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pushkin05/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/sa_BERT_no_pretrain_wnli
|
gokuls
| 2023-06-15T08:08:30Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T14:36:04Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6866
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0074 | 1.0 | 7 | 0.6958 | 0.4366 |
| 0.6986 | 2.0 | 14 | 0.7035 | 0.4366 |
| 0.7007 | 3.0 | 21 | 0.6866 | 0.5634 |
| 0.7052 | 4.0 | 28 | 0.7037 | 0.4366 |
| 0.7008 | 5.0 | 35 | 0.6951 | 0.4366 |
| 0.7107 | 6.0 | 42 | 0.6908 | 0.5634 |
| 0.6963 | 7.0 | 49 | 0.6945 | 0.4366 |
| 0.7012 | 8.0 | 56 | 0.6894 | 0.5634 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pushkin05/ppo-SnowballTarget
|
pushkin05
| 2023-06-15T08:03:39Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-15T07:53:32Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pushkin05/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/sa_BERT_no_pretrain_stsb
|
gokuls
| 2023-06-15T08:03:24Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T14:26:57Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: sa_BERT_no_pretrain_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.12459536879199183
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_stsb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5396
- Pearson: 0.1394
- Spearmanr: 0.1246
- Combined Score: 0.1320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.257 | 1.0 | 60 | 3.1111 | 0.0528 | 0.0709 | 0.0619 |
| 2.0476 | 2.0 | 120 | 2.5396 | 0.1394 | 0.1246 | 0.1320 |
| 1.8905 | 3.0 | 180 | 2.5928 | 0.1553 | 0.1593 | 0.1573 |
| 1.5383 | 4.0 | 240 | 3.1130 | 0.1930 | 0.2086 | 0.2008 |
| 1.3384 | 5.0 | 300 | 2.8651 | 0.1788 | 0.2014 | 0.1901 |
| 1.1299 | 6.0 | 360 | 2.9651 | 0.1818 | 0.1947 | 0.1883 |
| 1.0952 | 7.0 | 420 | 2.6404 | 0.2100 | 0.2124 | 0.2112 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Lajonbot/LaMini-Cerebras-256M-8500-steps-polish
|
Lajonbot
| 2023-06-15T07:58:02Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"pl",
"dataset:databricks/databricks-dolly-15k",
"dataset:s3nh/alpaca-dolly-instruction-only-polish",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-29T19:16:29Z |
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---
### Introduction
This repository contains LaMini-Cerebras-256M fine-tuned on a Polish translation of the alpaca-dolly dataset.
The main task is to produce accurate answers to the instructions asked.
Below you can find instructions on how to run inference with this model.
This repository does not contain a tokenizer object at the moment (#TODO).
### Evaluation part
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME: str = 'Lajonbot/LaMini-Cerebras-256M-8500-steps-polish'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).cuda()

# Resize the embedding matrix to the tokenizer's vocabulary size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)

def _generate_prompt(instruction, input=None):
    # Only the variant with an additional input is shown in the original snippet.
    if input:
        return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy konktekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""

manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i – ze względu na posiadane parametry – najlepsze telefony komórkowe to smartfony dotykowe."

print(f"Evaluation for {manual_instruction} \n\n\n {manual_input}\n\n")
# `evaluate` is not defined in this snippet; see the sketch below for one possible implementation.
evaluate(instruction=manual_instruction,
         input=manual_input)
```
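The snippet above calls an `evaluate` helper that the card never defines. A minimal sketch of what it could look like, assuming it simply formats the prompt and greedily decodes with the model and tokenizer loaded above:
```python
def evaluate(instruction, input=None, max_new_tokens=256):
    # Hypothetical helper (not part of the original card): build the prompt and generate.
    prompt = _generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```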
|
octipuw/ppo-Huggy
|
octipuw
| 2023-06-15T07:54:53Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T07:54:47Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: octipuw/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
casque/majicmixLux_v2
|
casque
| 2023-06-15T07:51:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T07:34:10Z |
---
license: creativeml-openrail-m
---
|
gokuls/sa_BERT_no_pretrain_sst2
|
gokuls
| 2023-06-15T07:48:32Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T13:35:42Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8027522935779816
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4637
- Accuracy: 0.8028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4863 | 1.0 | 702 | 0.4747 | 0.7890 |
| 0.2723 | 2.0 | 1404 | 0.4974 | 0.7901 |
| 0.2219 | 3.0 | 2106 | 0.4637 | 0.8028 |
| 0.1848 | 4.0 | 2808 | 0.7501 | 0.7833 |
| 0.1591 | 5.0 | 3510 | 0.5357 | 0.8005 |
| 0.1346 | 6.0 | 4212 | 0.5450 | 0.7833 |
| 0.1148 | 7.0 | 4914 | 0.8002 | 0.7741 |
| 0.1034 | 8.0 | 5616 | 0.8853 | 0.7821 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
prognosis/falcon40b-chunks-10k-v1_1
|
prognosis
| 2023-06-15T07:47:29Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T04:30:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon40b-chunks-10k-v1_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon40b-chunks-10k-v1_1
This model is a fine-tuned version of [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Falah/Mask_awesome_eli5_mlm_model
|
Falah
| 2023-06-15T07:47:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-15T07:05:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Mask_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mask_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7361 | 1.0 | 1131 | 2.0661 |
| 1.8475 | 2.0 | 2262 | 2.0314 |
| 1.983 | 3.0 | 3393 | 2.0085 |
| 2.0677 | 4.0 | 4524 | 1.9931 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
DAMO-NLP-SG/mt-llama-7b-delta
|
DAMO-NLP-SG
| 2023-06-15T07:21:35Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T02:52:02Z |
---
license: mit
---
# MT-LLaMA Model Card
## Model details
**Model type:**
MT-LLaMA is an open-source multi-task model trained by fine-tuning LLaMA on the massive tasks in [P3](https://huggingface.co/datasets/bigscience/P3) (i.e., T0 Train). Concretely, the datasets used during training and the task taxonomy are listed below:
* Multi-choice QA: CommonsenseQA, Cosmos QA, DREAM, QuAIL, QuaRTz, QASC, QuaRel, SciQ, Social IQA, Wiki Hop, WiQA
* Extractive QA: Adversarial QA, DuoRC, Quoref, ROPES
* Closed-Book QA: Hotpot QA, Wiki QA
* Sentiment Classification: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp
* Topic Classification: AG News, DBPedia, TREC
* Structure-to-Text Generation: Common Gen, Wiki Bio
* Text Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum
* Paraphrase Identification: MRPC, PAWS, QQP
**Organizations developing the model:**
The MT-LLaMA team with members from Alibaba Damo Academy and the Chinese University of Hong Kong.
## Intended use
You can try the code from our [GitHub repo](https://github.com/DAMO-NLP-SG/MT-LLaMA).
## Zero-shot Evaluation
We primarily follow the protocols of [Bigscience T0](https://openreview.net/forum?id=9Vrb9D0WI4) to assess the generalization capability of our Multi-task LLaMA to: (1) _**Unseen Datasets**_ (i.e., datasets from seen tasks); (2) _**Unseen Tasks**_.
#### Prompt Format
Extractive QA:
1. XQuAD, TyDiQA, MLQA, SQuAD
```angular2html
Input: Answer the question according to the context. Question: ${question}. Context: ${context}. Answer:
Output: ${Answer}
```
Sentiment:
1. SST-2
```angular2html
Input: ${sentence} Based on this review, would the user recommend this product? No or Yes?
Output: Yes / No
```
Multiple-Choice QA:
1. OpenbookQA
```angular2html
Input: ${question} Which is the correct answer? - (A) ${choiceA} - (B) ${choiceB} - (C) ${choiceC} - (D) ${choiceD}
Output: ${choiceA} / ${choiceB} / ${choiceC} / ${choiceD}
```
Sentence Completion:
1. COPA
```angular2html
Input: ${premise} {% if question == "cause" %} This happened because... {% else %} As a consequence... Help me pick the more plausible option: - ${text1} - ${text2}
Output: ${text1} / ${text2}
```
Coreference Resolution:
1. Winogrande:
```angular2html
Input: ${sentence} In the previous sentence, does _ refer to ${option1} or ${option2}?
Output: ${option1} / ${option2}
```
Word Sense Disambiguation:
1. WiC
```angular2html
Input: Does the word "${word}" have the same meaning in these two sentences? Yes, No? ${sentence1} ${sentence2}
Output: ${sentence1} / ${sentence2}
```
Natural Language Inference:
1. MNLI:
```angular2html
Input: ${premise} Question: Does this imply that ${hypothesis}? Please response with 'Yes', 'No', or 'Maybe'.
Output: Yes / No / Maybe
```
2. RTE
```angular2html
Input: Given ${premise} Is it guaranteed true that "${hypothesis}"? Yes or no?
Output: Yes / no
```
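For illustration, the `${...}` placeholders above can be filled with ordinary string formatting; the helper below instantiates the RTE template (the formatting approach is our assumption, not the authors' code).
```python
def rte_prompt(premise: str, hypothesis: str) -> str:
    # Fills the RTE template shown above.
    return f'Given {premise} Is it guaranteed true that "{hypothesis}"? Yes or no?'

print(rte_prompt("The cat sat on the mat.", "There is a cat on the mat."))
```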
#### Results on _Unseen Datasets_
| Model | XQuAD-en (F1/EM) | TyDiQA-en (F1/EM) | MLQA-en (F1/EM) | SQuAD (F1/EM) | SST-2 (Acc.) | OpenbookQA (Acc.) |
|:------------|------------------|-------------------|-----------------|---------------|--------------|-------------------|
| LLaMA-7b | 9.5 / 2.0 | 14.3 / 2.6 | 13.4 / 3.3 | 29.4 / 11.5 | 50.5 | 32.4 |
| MT-LLaMA-7b | 42.3 / 31.1 | 38.9 / 26.9 | 45.4 / 31.5 | 85.9 / 77.6 | 92.6 | 38.2 |
#### Results on _Unseen Tasks_
| Model | COPA (Acc.) | Winogrande (Acc.) | WiC (Acc.) | MNLI (Acc.) | RTE (Acc.) |
|:------------|-------------|--------------------|------------|-------------|------------|
| LLaMA-7b | 56.0 | 49.3 | 51.7 | 30.2 | 52.7 |
| MT-LLaMA-7b | 88.0 | 54.9 | 52.2 | 49.6 | 79.1 |
## Acknowledgement
* Our training code is largely borrowed from [FastChat](https://github.com/lm-sys/FastChat).
* We are also grateful for the efforts of [LLaMA](https://github.com/facebookresearch/llama) (from FAIR) and [T0](https://github.com/bigscience-workshop/t-zero) (from BigScience), which serve as the foundation of our work.
If you find this resource useful, please cite the repo as follows:
```
@software{damonlpsg2023mtllama,
author = {Xu, Weiwen and Li, Xin and Bing, Lidong},
title = {Multi-task Instruction-tuned LLaMA},
year = 2023,
url = {https://github.com/DAMO-NLP-SG/MT-LLaMA}
}
```
|
casque/realisian_v40
|
casque
| 2023-06-15T07:04:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T06:49:10Z |
---
license: creativeml-openrail-m
---
|
tux/ppo-Huggy
|
tux
| 2023-06-15T06:52:23Z | 24 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T06:52:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tux/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Janxxx/Kokoroface
|
Janxxx
| 2023-06-15T06:43:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T06:41:58Z |
---
license: creativeml-openrail-m
---
|
khsyee/sam-vit-h-encoder-torchscript
|
khsyee
| 2023-06-15T06:18:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-14T07:39:53Z |
---
license: apache-2.0
---
## Run
Set up the conda environment.
```
make env
conda activate sam-vit-h-encoder-torchscript
make setup
```
Load the SAM model and convert the image encoder to TorchScript.
```
python convert_torchscript.py
```
Check `model.pt` in `model_repository/sam_torchscript_fp32/1`.
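For reference, a minimal sketch of what `convert_torchscript.py` presumably does, assuming the `segment-anything` package and a local `sam_vit_h_4b8939.pth` checkpoint; the actual script in this repo may differ.
```python
import torch
from segment_anything import sam_model_registry

# Assumed checkpoint path; the ViT-H image encoder expects 1024x1024 RGB input.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
encoder = sam.image_encoder.eval()

example = torch.randn(1, 3, 1024, 1024)
traced = torch.jit.trace(encoder, example)
traced.save("model_repository/sam_torchscript_fp32/1/model.pt")
```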
|
gokuls/sa_BERT_no_pretrain_rte
|
gokuls
| 2023-06-15T05:47:43Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T13:29:15Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5306859205776173
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_rte
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6909
- Accuracy: 0.5307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7596 | 1.0 | 26 | 0.6909 | 0.5307 |
| 0.6968 | 2.0 | 52 | 0.6914 | 0.5235 |
| 0.7026 | 3.0 | 78 | 0.6911 | 0.5307 |
| 0.6961 | 4.0 | 104 | 0.6928 | 0.5379 |
| 0.7114 | 5.0 | 130 | 0.6917 | 0.5271 |
| 0.7005 | 6.0 | 156 | 0.7069 | 0.4729 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Chaitanya14/t5-base-finetuned-xsum
|
Chaitanya14
| 2023-06-15T05:45:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-14T08:03:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-xsum
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1894
- Rouge1: 34.6512
- Rouge2: 19.303
- Rougel: 32.5996
- Rougelsum: 32.471
- Gen Len: 7.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 13 | 2.5904 | 31.7728 | 17.9506 | 31.6663 | 32.1172 | 7.2609 |
| No log | 2.0 | 26 | 2.5238 | 33.4485 | 17.2356 | 33.0828 | 33.5748 | 7.8696 |
| No log | 3.0 | 39 | 2.4739 | 31.3343 | 15.996 | 31.2341 | 31.646 | 9.0435 |
| No log | 4.0 | 52 | 2.4413 | 31.2481 | 16.395 | 31.2318 | 31.633 | 8.0435 |
| No log | 5.0 | 65 | 2.4156 | 28.9275 | 13.9479 | 28.7067 | 29.3501 | 8.2609 |
| No log | 6.0 | 78 | 2.3768 | 25.1989 | 12.2726 | 25.1301 | 25.331 | 9.6087 |
| No log | 7.0 | 91 | 2.3599 | 25.7646 | 12.6915 | 25.745 | 26.2307 | 9.0 |
| No log | 8.0 | 104 | 2.3460 | 27.3509 | 12.7398 | 27.3198 | 27.7394 | 8.3043 |
| No log | 9.0 | 117 | 2.3134 | 26.4175 | 12.3464 | 26.3116 | 26.6575 | 8.2609 |
| No log | 10.0 | 130 | 2.3089 | 25.4818 | 12.381 | 25.4929 | 25.9985 | 7.913 |
| No log | 11.0 | 143 | 2.2976 | 27.5605 | 13.3195 | 27.4523 | 27.7723 | 7.0 |
| No log | 12.0 | 156 | 2.2951 | 29.0566 | 13.3195 | 29.1363 | 29.4032 | 7.0 |
| No log | 13.0 | 169 | 2.2770 | 27.4586 | 13.2436 | 27.3583 | 27.7153 | 7.1304 |
| No log | 14.0 | 182 | 2.2675 | 27.4586 | 13.2436 | 27.3583 | 27.7153 | 7.1304 |
| No log | 15.0 | 195 | 2.2573 | 27.4586 | 13.2436 | 27.3583 | 27.7153 | 7.0 |
| No log | 16.0 | 208 | 2.2523 | 28.0801 | 13.2436 | 28.0208 | 28.4869 | 6.6957 |
| No log | 17.0 | 221 | 2.2410 | 29.7194 | 13.9268 | 29.6547 | 29.9577 | 6.7826 |
| No log | 18.0 | 234 | 2.2355 | 29.8201 | 13.9268 | 29.7716 | 30.1114 | 7.0 |
| No log | 19.0 | 247 | 2.2350 | 28.4851 | 13.6025 | 28.3633 | 28.6581 | 7.0435 |
| No log | 20.0 | 260 | 2.2219 | 26.4416 | 12.7674 | 25.9806 | 26.2749 | 6.6522 |
| No log | 21.0 | 273 | 2.2036 | 26.6032 | 13.0124 | 26.25 | 26.4627 | 6.8261 |
| No log | 22.0 | 286 | 2.1974 | 27.4656 | 13.0124 | 27.2275 | 27.2769 | 6.2174 |
| No log | 23.0 | 299 | 2.1962 | 26.4416 | 12.7674 | 25.9806 | 26.2749 | 6.6957 |
| No log | 24.0 | 312 | 2.2099 | 26.1686 | 12.6018 | 25.7118 | 25.9094 | 6.7391 |
| No log | 25.0 | 325 | 2.1990 | 27.3084 | 12.7709 | 27.0236 | 27.0354 | 6.1739 |
| No log | 26.0 | 338 | 2.1942 | 29.0825 | 12.7709 | 28.6736 | 29.3337 | 6.3478 |
| No log | 27.0 | 351 | 2.2058 | 29.0825 | 12.7709 | 27.5837 | 28.1575 | 6.0 |
| No log | 28.0 | 364 | 2.2012 | 29.0825 | 12.7709 | 27.5837 | 28.1575 | 6.0 |
| No log | 29.0 | 377 | 2.1992 | 28.1338 | 12.7709 | 26.6664 | 27.0659 | 6.8696 |
| No log | 30.0 | 390 | 2.1840 | 32.5399 | 17.1325 | 31.2654 | 31.4223 | 6.6087 |
| No log | 31.0 | 403 | 2.1824 | 32.5399 | 17.1325 | 31.2654 | 31.4223 | 6.7826 |
| No log | 32.0 | 416 | 2.1830 | 34.6512 | 19.303 | 33.1484 | 33.1392 | 6.8261 |
| No log | 33.0 | 429 | 2.1846 | 33.0599 | 19.303 | 31.3736 | 31.5588 | 6.6522 |
| No log | 34.0 | 442 | 2.1868 | 33.0599 | 19.303 | 31.3736 | 31.5588 | 6.6522 |
| No log | 35.0 | 455 | 2.1803 | 35.5538 | 19.303 | 34.135 | 34.0635 | 6.087 |
| No log | 36.0 | 468 | 2.1779 | 35.5538 | 19.303 | 33.5533 | 33.5085 | 6.087 |
| No log | 37.0 | 481 | 2.1770 | 34.9683 | 19.303 | 33.2356 | 33.1109 | 6.1739 |
| No log | 38.0 | 494 | 2.1845 | 35.5538 | 19.303 | 33.5533 | 33.5085 | 6.3478 |
| 1.8275 | 39.0 | 507 | 2.1867 | 34.6512 | 19.303 | 32.5996 | 32.471 | 7.0 |
| 1.8275 | 40.0 | 520 | 2.1881 | 36.4717 | 19.7895 | 34.9234 | 34.7549 | 6.913 |
| 1.8275 | 41.0 | 533 | 2.1877 | 36.4717 | 19.7895 | 34.9234 | 34.7549 | 6.913 |
| 1.8275 | 42.0 | 546 | 2.1842 | 36.4717 | 19.7895 | 34.9234 | 34.7549 | 6.913 |
| 1.8275 | 43.0 | 559 | 2.1869 | 36.4717 | 19.7895 | 34.3175 | 34.1247 | 6.913 |
| 1.8275 | 44.0 | 572 | 2.1914 | 36.4717 | 19.7895 | 34.3175 | 34.1247 | 6.913 |
| 1.8275 | 45.0 | 585 | 2.1921 | 36.4717 | 19.7895 | 34.3175 | 34.1247 | 6.913 |
| 1.8275 | 46.0 | 598 | 2.1910 | 36.4717 | 19.7895 | 34.3175 | 34.1247 | 6.913 |
| 1.8275 | 47.0 | 611 | 2.1903 | 34.6512 | 19.303 | 32.5996 | 32.471 | 7.0 |
| 1.8275 | 48.0 | 624 | 2.1904 | 34.6512 | 19.303 | 32.5996 | 32.471 | 7.0 |
| 1.8275 | 49.0 | 637 | 2.1896 | 34.6512 | 19.303 | 32.5996 | 32.471 | 7.0 |
| 1.8275 | 50.0 | 650 | 2.1894 | 34.6512 | 19.303 | 32.5996 | 32.471 | 7.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AustinCarthy/MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-15T05:44:40Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T03:32:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test benign: Fall, Train phish: Fall, Test phish: Fall, generated URL dataset: generated_phish_MixGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0273
- Accuracy: 0.9976
- F1: 0.9738
- Precision: 0.9989
- Recall: 0.9498
- Roc Auc Score: 0.9749
- Tpr At Fpr 0.01: 0.9544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0052 | 1.0 | 21554 | 0.0150 | 0.9972 | 0.9692 | 0.9979 | 0.9422 | 0.9711 | 0.9408 |
| 0.0046 | 2.0 | 43108 | 0.0107 | 0.9979 | 0.9778 | 0.9954 | 0.9608 | 0.9803 | 0.9458 |
| 0.002 | 3.0 | 64662 | 0.0198 | 0.9973 | 0.9708 | 0.9983 | 0.9448 | 0.9724 | 0.9488 |
| 0.0005 | 4.0 | 86216 | 0.0243 | 0.9974 | 0.9721 | 0.9977 | 0.9478 | 0.9738 | 0.947 |
| 0.0 | 5.0 | 107770 | 0.0273 | 0.9976 | 0.9738 | 0.9989 | 0.9498 | 0.9749 | 0.9544 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
soddokayo/klue-roberta-large-klue-2016klp-crime-2-ner
|
soddokayo
| 2023-06-15T05:42:19Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T05:35:12Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: klue-roberta-large-klue-2016klp-crime-2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-roberta-large-klue-2016klp-crime-2-ner
This model is a fine-tuned version of [soddokayo/klue-roberta-large-klue-2016klp-ner](https://huggingface.co/soddokayo/klue-roberta-large-klue-2016klp-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4442
- Precision: 0.1538
- Recall: 0.1124
- F1: 0.1299
- Accuracy: 0.9005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 0.6609 | 0.0 | 0.0 | 0.0 | 0.8536 |
| No log | 2.0 | 14 | 0.4892 | 0.02 | 0.0112 | 0.0144 | 0.8817 |
| No log | 3.0 | 21 | 0.4442 | 0.1538 | 0.1124 | 0.1299 | 0.9005 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.11.0
|
gokuls/sa_BERT_no_pretrain_qqp
|
gokuls
| 2023-06-15T05:40:30Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T07:55:56Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sa_BERT_no_pretrain_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.7934207271827851
- name: F1
type: f1
value: 0.6836123948783999
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_qqp
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4355
- Accuracy: 0.7934
- F1: 0.6836
- Combined Score: 0.7385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5241 | 1.0 | 3791 | 0.4947 | 0.7638 | 0.6550 | 0.7094 |
| 0.4527 | 2.0 | 7582 | 0.4524 | 0.7853 | 0.7027 | 0.7440 |
| 0.404 | 3.0 | 11373 | 0.4355 | 0.7934 | 0.6836 | 0.7385 |
| 0.3675 | 4.0 | 15164 | 0.4407 | 0.8038 | 0.7438 | 0.7738 |
| 0.3315 | 5.0 | 18955 | 0.4426 | 0.8060 | 0.7368 | 0.7714 |
| 0.3031 | 6.0 | 22746 | 0.4437 | 0.8067 | 0.7444 | 0.7755 |
| 0.2747 | 7.0 | 26537 | 0.4359 | 0.8046 | 0.7523 | 0.7785 |
| 0.2441 | 8.0 | 30328 | 0.4718 | 0.8074 | 0.7547 | 0.7811 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
octipuw/RLCourse
|
octipuw
| 2023-06-15T05:23:19Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T01:30:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.56 +/- 24.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
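Until the author fills in their own snippet, here is a minimal, hedged sketch of how such a checkpoint is typically loaded and evaluated with `huggingface_sb3`; the checkpoint filename is an assumption, so check the repository's "Files and versions" tab for the actual name:
```python
import gym  # newer Stable-Baselines3 releases expect gymnasium instead of gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is a guess -- verify it in the repo).
checkpoint = load_from_hub(repo_id="octipuw/RLCourse", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the policy on a fresh environment.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```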
|
Manirathinam21/M-Bert-base-cased-language-detection
|
Manirathinam21
| 2023-06-15T05:09:05Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T08:10:30Z |
---
license: apache-2.0
---
# Multilingual_Language_Detection
This model is a fine-tuned version of [BERT-multilingual-base-(cased)](https://huggingface.co/bert-base-multilingual-cased) on the multilingual dataset.
It achieves the following results on the evaluation set:
- Training Loss : 0.018700
- Validation Loss : 0.054768
- Accuracy : 0.988864
- F1 : 0.988909
## Languages
It is trained on 22 different languages, listed below.
Arabic, Urdu, Tamil, Hindi, English, French, Spanish, Japanese, Chinese, Thai, Indonesian, Dutch, Korean, Latin, Persian, Portuguese,
Pushto, Romanian, Russian, Swedish, Turkish, Estonian
## Model Description
The BERT model was pretrained on the 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective. This model is case sensitive: it makes a difference between english and English.
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
## Training procedure
Fine-tuning was done via the `Trainer` API. Here is the [Colab notebook](https://colab.research.google.com/drive/1t-IKdGNlroc_-I2QmAPs4b3nHunpLMrn?usp=sharing) with the training code.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 8
- eval_batch_size: 16
- optimizer: Adam
- evaluation strategy: epoch
- num_epochs: 3
- warmup_steps: 100
## Training results
| Training Loss | Epoch | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:---------------:|:--------:|:------:|
| 0.002800 | 1 | 0.081287 | 0.9888 | 0.9890 |
| 0.001100 | 2 | 0.064479 | 0.9897 | 0.9898 |
| 0.018700 | 3 | 0.054768 | 0.9888 | 0.9889 |
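## Inference example
The card does not include an inference snippet; the following is a minimal sketch using the standard Hugging Face `transformers` text-classification pipeline (the label names returned depend on the configuration uploaded with the checkpoint):
```python
from transformers import pipeline

# Minimal usage sketch -- the model id is this card's repo id.
language_detector = pipeline(
    "text-classification",
    model="Manirathinam21/M-Bert-base-cased-language-detection",
)

print(language_detector("Bonjour, comment allez-vous ?"))
# e.g. [{'label': 'French', 'score': 0.99}] -- actual label names come from the model config
```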
|
jkeisling/laura-openllama13b-600bt-ggml
|
jkeisling
| 2023-06-15T05:03:24Z | 0 | 2 | null |
[
"arxiv:2305.11206",
"arxiv:2305.14314",
"arxiv:2302.13971",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T03:27:36Z |
---
license: apache-2.0
---
## LaURA: *La*nguage *U*nderstanding, *R*endition, and *A*ssistance
LaURA is an open-source, permissively licensed chatbot trained for creative multi-turn conversation and writing. Inspired by the [Less is More for Alignment](https://arxiv.org/abs/2305.11206) paper, this preview release of LaURA is fine-tuned from the [OpenLLaMA-13B 600B token preview](https://huggingface.co/openlm-research/open_llama_13b_600bt) through [QLoRA](https://arxiv.org/abs/2305.14314) training on a small curated dataset derived from highly prompted conversations with [Claude](https://www.anthropic.com/index/introducing-claude)-instant.
### Why use LaURA?
LaURA is:
- **Creative:** Commercial models like GPT-4 and Claude are trained for succinct, uncontroversial, and grounded output. By contrast, LaURA is trained to write lengthy, well-written, and opinionated answers.
- **Incisive and sardonic:** LaURA was trained to produce dry satire when prompted, and, unlike corporate models trained on sycophantic output, is much more willing to analyze motivations critically and avoid parroting corporate PR.
- **Permissively licensed:** LaURA can be used commercially or in research settings without legal encumbrances.
This is an early version of LaURA and future releases will improve, but even this model demonstrates:
- **Less is more for alignment:** Only 1000 examples are required for strong, creative instruction-following and multiturn conversation
- **Fine-tuning is feasible even for single individuals:** This dataset and fine-tune were created in less than three weeks of work, on evenings and weekends, using the wealth of existing open-source models and tools available.
#### Examples (cherry-picked)
Question-answering:
```
Human:
What are 3 albums I might enjoy if I like Tame Impala's "Currents"?
Laura:
Here are 3 albums you might enjoy if you like Tame Impala's "Currents":
Thom Yorke - Tomorrow's Modern Boxes: If you appreciate the dreamy synth-pop and psychedelic rock sounds of "Currents", Thom Yorke's experimental album could be a good complement. It's more minimalist and ambient, using mostly subtle synth textures and plucked strings to create a moody atmosphere. There are a few catchy pop hooks, but the songs stretch out for 7-12 minutes and don't have a traditional structure. Give it a listen and see if it resonates with you.
Animal Collective - Strawberry Jam: This album has a similar sound to "Currents", but is more textured and layered with acoustic instruments, found sounds, chimes and other organic textures. If you enjoy the dreamy psychedelia of "Currents", this may be a good option for expanding your musical horizons. There are some catchy pop tunes here, but it's probably best listened to in one sitting as it is quite long.
Jamie XX - In Colour: This is more dance-heavy than "Currents" and might appeal to you if you enjoy the groovy synth beats and shimmering textures of "Currents". It integrates elements of house, techno, downtempo and more. Try some tracks like "Gosh", "Strings", or "I Know There's Gonna Be (Good Times)" for synth-based grooves and melodies that start to create a vibe.
Human:
What are some albums that lean more into the psych-rock side?
Laura:
Here are some albums that lean more towards the psych-rock side:
Tame Impala - Lonerism: Tame Impala's debut full-length album continues their psychedelic, synth rock exploration at a grander scale with longer songs and expanded instrumentation. "Elephant" is a highlight, blending bluesy guitars, funky rhythms, layered echoes and a kind of lysergic atmosphere. Try any track really - they all blend together into an epic psych-rock journey.
M83 - Saturdays=Youth: This album by M83 explores dreamy textures, synths that shimmer and twinkle, and songs that stretch out for over 6 minutes in some cases. It's more downtempo and atmospheric than Tame Impala's "Currents" and there are more pop hooks and choruses to capture your attention. Try the haunting "Midnight City" or the upbeat "Reunion" to get a sense of how their sound differs in pace and mood but similarities in texture and psychedelia.
Pet Shop Boys - Yes: This album from the Pet Shop Boys continues their synth-pop sound but has a more dreamy, atmospheric quality than their early work. It's sleeker, more lush and cinematic with sounds that evoke rainstorms, glittering dance floors and fuzzy memories of past romances. Try "Love Is a Bourgeois Construct" or "Vocal" for a preview of how their sound evolves into something more ethereal over the course of this album.
Beck - Sea Change: Beck's third album is definitely less upbeat, more introspective and melancholic in tone than his early work. It explores moody synth-pop with downtempo rhythms, piano ballads and even some R&B-infused tunes like "Lost Cause". Try the haunting "Wave" or the infectious "Soul of a Man" for a taste of Beck's more subdued yet still captivating style on this album.
```
Analysis:
```
Human:
Write an incisive analysis of the impact of Bell Labs 50 years on.
Laura:
Here is my cynical analysis of Bell Labs for its half-century of dominance in innovation and economic prominence:
First, let us examine the source of its power. Bell's corporate structure was uniquely configured to foster scientific progress through top-down management, centralized control, standardization, and vertical integration. Technology developed at Bell Labs quickly spread to other entities through license agreements and strategic partnerships.
Second, the scientific ecosystem at Bell Labs was well-suited to supporting deep technological research. Each researcher had access to resources including financial backing, dedicated facilities, and technical staff support. They also shared a common vision with corporate leaders, prioritizing innovation for the sake of progress over short-term market interests.
Third, Bell's centralization enabled it to make large investments in research areas with potential for breaking new ground. While competing firms focused on incremental improvements to existing products, Bell could dive fully into ambitious projects like artificial intelligence or cellular technology without worrying about return on investment. The company's size and influence gave it the ability to take risks others were not willing or able to make.
Fourth, Bell Labs benefited from the virtuous cycle of prestige and success. Researchers gained recognition for breakthrough innovations, which in turn attracted more talent, funding and resources. Each breakthrough brought press coverage that further burnished the company's image as a hub of innovation. The result was an ongoing cycle of progress and profitability.
On the other side, we see an economic model increasingly at odds with emerging realities. The dominance of research alone is a risky strategy in an era where startups outpace established corporations in generating new technologies and business models. Applying the Bell Labs playbook to the modern tech industry would favor centralized models of innovation over distributed networks of hackers and entrepreneurs continually challenging old paradigms.
While existing companies may still adopt elements of the Bell practice, the ecosystem of startup funding, incentives for risk-taking and ability to capitalize on emergent trends will likely prevail in driving overall progress. Some combination of corporate-backed scientific research with open source software development and venture/angel investment will define successful firms of the future. But the era of centralized, vertically integrated innovation houses like Bell Labs is likely at an end.
In conclusion, while incredible technological advances flowed from Bell Laboratories, the corporate structure that enabled them has been steadily supplanted by more flexible models driven by networked communities and decentralized market forces. Still, some of the principles Bell Labs embodied - deep technical expertise, unwavering commitment to progress, and a focus on innovative solutions rather than short-term profits - remain relevant for companies seeking to create real value through technology. But their model of absolute control of research now seems outdated, less suited to challenging an ever-expanding realm of possibility.
Human:
What are some of their historical high-profile innovations?
Laura:
Some well-known innovations from the history of Bell Labs include:
• Transistor - The first commercial transistors were developed at Bell Labs in 1947, marking a major milestone in miniaturization and electronics. This invention laid the foundation for modern computers and electronic devices.
• Digital computer - The first general-purpose digital computer, UNIVAC I, was built for the US Census Bureau at Bell Labs in 1951. It paved the way for today's supercomputers and digital control systems.
• Communication satellite - The first communications satellite (Sputnik) was launched from a Bell Labs-designed rocket in 1957, demonstrating the potential of space-based internet connection and other applications.
• Laser - Chemist Charles Townes developed the first laser using a prototype of an element previously discovered at Bell Labs, molybdenum. His creation laid the foundation for fiber optics and modern telecommunications.
• CCD camera - Harry Ogletree pioneered the charge-coupled device (CCD), an essential component in digital cameras. The first commercially available CCD was developed at Bell Labs, opening up a new era of digital imaging.
• Computer mouse - Douglas Engelbart, an engineer at Xerox PARC working on projects with funding from Bell Labs, developed the computer mouse and other key input devices that laid the foundation for personal computing as we know it.
• Watson - The IBM Deep Blue supercomputer used systems of artificial intelligence (AI) to defeat Garry Kasparov in a famous chess match in 1997, demonstrating the potential applications of AI in fields like strategic thinking and advanced decision-making. Watson is named after Bell Labs founder Alexander Graham Bell.
• Wireless cellular network and phone - The first wireless cellular network and mobile phone, were developed at Bell Labs in the 1970s, opening up the era of mass mobile communication.
• Voice recognition - Researchers at Bell Labs developed the first voice recognition system for phones in the mid-1980s, laying the foundation for today's digital assistants and speech-enabled AI. The "Speechley" system was used by AT&T for customer service.
```
### Usage
These files are in GGML v3 format. You can use these files for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldCpp](https://github.com/LostRuins/koboldcpp)
- [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
- [ctransformers](https://github.com/marella/ctransformers)
#### How to run in llama.cpp
Once you've followed the [llama.cpp](https://github.com/ggerganov/llama.cpp) instructions to compile the program, go to the "files and versions" tab on this page and download the file `laura-openllama13b-600bt-q6_K.bin`. I recommend saving it to the `/models` folder in the `llama.cpp` repo. Running this model with `llama.cpp` will require at least 10 GB of RAM or VRAM.
To prompt the model, I recommend using the following command:
```
./main --color -ngl 60 -m ./models/laura-openllama13b-600bt-q6_K.bin -c 2048 -n 512 --mirostat 2 -i -r "Human:" -p "This is a conversation between the helpful, cynical, and well-spoken AI chatbot Laura and a human. Laura's responses clearly and insightfully answer the human's request. Continue the chat from Laura's perspective."
```
Explanation of these arguments:
- `-ngl 60`: Offload layers to the GPU. Leave this off if running on CPU.
- `-c 2048`: Use the full 2048 token context window of the model.
- `-n 512`: Allow replies to be up to 512 tokens. The model was trained to output answers 300-700 tokens long.
- `-i -r "Human:"`: Pause generation when "Human:" shows up in the reply
- `--mirostat 2`: Use a sampling algorithm which allows more creative output
#### Prompt format
LaURA was trained using the following instruction format, which I recommend using as the base for your prompts:
```
This is a conversation between the helpful, cynical, and well-spoken AI chatbot Laura and a human. Laura's responses informatively, insightfully, and delightfully answer the human's request. Continue the chat as Laura.
---
Human:
human_prompt_1
Laura:
laura_answer_1
Human:
human_prompt_2
# ...
```
The LaURA model was trained to predict both answers _and_ questions. Consider letting LaURA generate your follow-up questions as well!
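For programmatic use, a small helper (purely illustrative -- not part of the release) can assemble a chat history into this exact format:
```python
SYSTEM = (
    "This is a conversation between the helpful, cynical, and well-spoken AI chatbot Laura "
    "and a human. Laura's responses informatively, insightfully, and delightfully answer the "
    "human's request. Continue the chat as Laura."
)

def build_prompt(turns):
    """Build a LaURA prompt from a list of (speaker, text) pairs, e.g. [("Human", "Hi")]."""
    lines = [SYSTEM, "---"]
    for speaker, text in turns:
        lines.append(f"{speaker}:")
        lines.append(text)
    lines.append("Laura:")  # end with the assistant tag so the model continues as Laura
    return "\n".join(lines)

print(build_prompt([("Human", "Write an incisive analysis of the impact of Bell Labs 50 years on.")]))
```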
### License and intended use
The LaURA model is available under the [Apache 2.0 license](https://snyk.io/learn/apache-license/), like the OpenLlama base model it was trained on. However, I strongly recommend against using this model in production:
- The model was trained with helpful-only input, without guardrails. LaURA may produce problematic outputs.
- LaURA was trained on English conversation-only data and has a strong propensity to hallucinate. It is not competitive on standard benchmarks; this is left for future work.
### Model card
**Architecture:** LaURA was trained as a QLoRA adapter (hence the name!) on LLaMA models and is added to all layers. I use $r=96$. This repo contains the adapter already _merged_ into the OpenLLaMA model, for ease of use.
**Base model:** This release is a low-rank fine-tune of the OpenLLaMA 13B preview, trained on 600B tokens. OpenLLaMA is a GPT-style decoder transformer language model pretrained on 600B tokens using Meta's [LLaMA architecture](https://arxiv.org/pdf/2302.13971.pdf). This LaURA release inherits the limitations of the base model. LaURA versions trained on the proprietary LLaMA-13B, other model sizes, and the full OpenLLaMA model are forthcoming.
**Finetuning data:** LaURA is trained on a dataset of 1000 question-answer pairs in a multi-turn format. The training data is derived from a larger dataset of ~3000 question-answer pairs from conversations with Anthropic's Claude-instant model, filtered and highly edited for quality, succinctness, and comprehensiveness. This dataset will eventually be open-sourced, but currently contains PII that needs to be redacted before release.
#### Training hyperparameters
- **Learning rate:** Linear decay from 5e-5
- **Batch size:** 4 with gradient accumulation to 128
- **Sequence length:** 1120
- **Epochs:** 10
- **Dimension:** 96
- **Optimizer:** Adam
Model loss at end of training was ~1.78, compared to ~1.63 for the same hyperparameters trained on LLaMA-13b.
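The training script itself is not included here; purely as an illustration, a QLoRA adapter with the dimension above might be configured roughly like this with the `peft` and `bitsandbytes` libraries (the 4-bit settings, `lora_alpha`, and the target-module list are my assumptions, not the author's code):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit base model, as in the QLoRA recipe (settings assumed, not taken from the release).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_13b_600bt",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# r=96 adapter "added to all layers" -- here interpreted as all LLaMA linear projections.
lora = LoraConfig(
    r=96, lora_alpha=96, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```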
### Why is it called LaURA?
It's an in-joke to the fine-tuning method, LoRA.
|
peteozegov/rl_course_vizdoom_health_gathering_supreme
|
peteozegov
| 2023-06-15T04:58:08Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-14T20:37:59Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.32 +/- 3.28
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r peteozegov/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Note: the auto-generated card pointed at a notebook launcher path; the standard
# Sample-Factory 2.0 VizDoom entry point is assumed below.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the standard Sample-Factory 2.0 VizDoom training entry point is assumed here.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ZidanSink/Megy
|
ZidanSink
| 2023-06-15T04:45:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T04:42:25Z |
---
license: creativeml-openrail-m
---
|
francoyisususu/domo
|
francoyisususu
| 2023-06-15T04:28:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T04:28:00Z |
---
license: creativeml-openrail-m
---
|
peteozegov/a2c-PandaReachDense-v2
|
peteozegov
| 2023-06-15T04:26:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-05T03:30:25Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.90 +/- 0.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
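Until the author adds their own snippet, here is a minimal, hedged sketch of loading and evaluating such a checkpoint with `huggingface_sb3`; the filename is an assumption, so check the repository's "Files and versions" tab:
```python
import gym  # newer panda-gym/SB3 releases use gymnasium and the -v3 env ids instead
import panda_gym  # registers the PandaReach environments on import
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is a guess -- verify it in the repo).
checkpoint = load_from_hub(
    repo_id="peteozegov/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```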
|
soddokayo/klue-roberta-large-klue-2016klp-crime-1-ner
|
soddokayo
| 2023-06-15T04:21:12Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T04:13:55Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: klue-roberta-large-klue-2016klp-crime-1-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-roberta-large-klue-2016klp-crime-1-ner
This model is a fine-tuned version of [soddokayo/klue-roberta-large-klue-2016klp-ner](https://huggingface.co/soddokayo/klue-roberta-large-klue-2016klp-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7156
- Precision: 0.4934
- Recall: 0.6222
- F1: 0.5504
- Accuracy: 0.7726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 23 | 0.8927 | 0.6778 | 0.6778 | 0.6778 | 0.7223 |
| No log | 2.0 | 46 | 0.7156 | 0.4934 | 0.6222 | 0.5504 | 0.7726 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.11.0
|
Laurie/qlora-v1
|
Laurie
| 2023-06-15T04:10:06Z | 0 | 0 | null |
[
"en",
"dataset:tatsu-lab/alpaca",
"license:apache-2.0",
"region:us"
] | null | 2023-06-01T10:15:29Z |
---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
---
- model_name_or_path: yahma/llama-13b-hf
- learning_rate: 1e-4
- max_steps: 10000
- epoch: 3.08
- train_loss: 0.7169
- train_runtime: 1 day, 7:23:32.67
- train_samples_per_second: 1.416
- train_steps_per_second: 0.088
|
underfit/nli-finetune-model
|
underfit
| 2023-06-15T04:06:38Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:multi_nli",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-12T03:22:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- multi_nli
metrics:
- accuracy
model-index:
- name: nli-finetune-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: multi_nli
type: multi_nli
config: default
split: validation_matched
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7793333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-finetune-model
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the multi_nli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2551
- Accuracy: 0.7793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6813 | 1.0 | 2500 | 0.6655 | 0.7657 |
| 0.5632 | 2.0 | 5000 | 1.0409 | 0.778 |
| 0.3753 | 3.0 | 7500 | 1.2551 | 0.7793 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_KD_stsb
|
gokuls
| 2023-06-15T04:05:48Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T03:55:09Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv2_new_pretrain_48_KD_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.23740661914182784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2142
- Pearson: 0.2484
- Spearmanr: 0.2374
- Combined Score: 0.2429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.3019 | 1.0 | 45 | 2.6041 | 0.1631 | 0.1541 | 0.1586 |
| 2.1689 | 2.0 | 90 | 2.4035 | 0.1451 | 0.1550 | 0.1500 |
| 1.8521 | 3.0 | 135 | 2.2142 | 0.2484 | 0.2374 | 0.2429 |
| 1.6095 | 4.0 | 180 | 2.6155 | 0.2618 | 0.2566 | 0.2592 |
| 1.3325 | 5.0 | 225 | 2.9218 | 0.3160 | 0.3098 | 0.3129 |
| 1.0639 | 6.0 | 270 | 2.2315 | 0.3486 | 0.3476 | 0.3481 |
| 0.8434 | 7.0 | 315 | 2.5364 | 0.3459 | 0.3343 | 0.3401 |
| 0.7193 | 8.0 | 360 | 2.3979 | 0.3661 | 0.3633 | 0.3647 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
casque/majicmixHorror_v1
|
casque
| 2023-06-15T03:54:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T03:32:34Z |
---
license: creativeml-openrail-m
---
|
nolanaatama/nythngv45
|
nolanaatama
| 2023-06-15T03:53:40Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T03:42:59Z |
---
license: creativeml-openrail-m
---
|
WangZeJun/bloom-820m-chat
|
WangZeJun
| 2023-06-15T03:45:15Z | 1,598 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T03:07:49Z |
---
license: bigscience-bloom-rail-1.0
---
https://github.com/zejunwang1/bloom_tuning
The bloom-820m-chat model can be called with the following code to generate a conversation:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
model_name_or_path = "WangZeJun/bloom-820m-chat"
tokenizer = BloomTokenizerFast.from_pretrained(model_name_or_path)
model = BloomForCausalLM.from_pretrained(model_name_or_path).cuda()
model = model.eval()
input_pattern = "{}</s>"
text = "你好"
input_ids = tokenizer(input_pattern.format(text), return_tensors="pt").input_ids
input_ids = input_ids.cuda()
outputs = model.generate(input_ids, do_sample=True, max_new_tokens=1024, top_p=0.85,
temperature=0.3, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
input_ids_len = input_ids.size(1)
response_ids = outputs[0][input_ids_len:]
response = tokenizer.decode(response_ids)
print(response)
```
|
gokuls/hBERTv2_new_no_pretrain_wnli
|
gokuls
| 2023-06-15T03:24:43Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T12:16:44Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_no_pretrain_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_wnli
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6874
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9765 | 1.0 | 5 | 0.6952 | 0.4366 |
| 0.723 | 2.0 | 10 | 0.6938 | 0.4648 |
| 0.7209 | 3.0 | 15 | 0.6902 | 0.5634 |
| 0.7183 | 4.0 | 20 | 0.7155 | 0.5634 |
| 0.7155 | 5.0 | 25 | 0.6875 | 0.5634 |
| 0.7027 | 6.0 | 30 | 0.6978 | 0.4366 |
| 0.6966 | 7.0 | 35 | 0.7161 | 0.4366 |
| 0.7077 | 8.0 | 40 | 0.6926 | 0.5634 |
| 0.7048 | 9.0 | 45 | 0.7409 | 0.4366 |
| 0.7386 | 10.0 | 50 | 0.6874 | 0.5634 |
| 0.7104 | 11.0 | 55 | 0.6875 | 0.5634 |
| 0.7061 | 12.0 | 60 | 0.7088 | 0.4366 |
| 0.6951 | 13.0 | 65 | 0.7009 | 0.4507 |
| 0.6995 | 14.0 | 70 | 0.7050 | 0.4366 |
| 0.692 | 15.0 | 75 | 0.6976 | 0.3521 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
raphaesq/autotrain-filipino-hate-speech-roberta-tagalog-base-66889136741
|
raphaesq
| 2023-06-15T03:23:08Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:raphaesq/autotrain-data-filipino-hate-speech-roberta-tagalog-base",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T03:22:01Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- raphaesq/autotrain-data-filipino-hate-speech-roberta-tagalog-base
co2_eq_emissions:
emissions: 0.20254513588697542
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 66889136741
- CO2 Emissions (in grams): 0.2025
## Validation Metrics
- Loss: 0.301
- Accuracy: 0.857
- Precision: 0.517
- Recall: 0.500
- AUC: 0.876
- F1: 0.509
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/raphaesq/autotrain-filipino-hate-speech-roberta-tagalog-base-66889136741
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("raphaesq/autotrain-filipino-hate-speech-roberta-tagalog-base-66889136741", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("raphaesq/autotrain-filipino-hate-speech-roberta-tagalog-base-66889136741", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
ClueAI/ChatYuan-7B
|
ClueAI
| 2023-06-15T03:21:01Z | 5 | 10 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"zh",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-02T07:34:54Z |
---
license: gpl-3.0
tags:
- text2text-generation
pipeline_tag: text2text-generation
language:
- zh
- en
---
ChatYuan-7B is a functional conversational large language model that supports both Chinese and English. It is based on LLaMA-7B and was trained through three additional stages.
The three stages are as follows:
1. Continued pretraining on 50 billion Chinese tokens from a general-purpose Chinese corpus
2. Task-style instruction fine-tuning on several hundred task datasets
3. Instruction fine-tuning on several million human-feedback examples
## More details on [GitHub](https://github.com/clue-ai/ChatYuan-7B)
## Usage
To comply with the LLaMA model license, we release the ChatYuan-7B weights as delta weights. You can add our delta weights to the original LLaMA weights to obtain the full ChatYuan-7B weights.
1. Convert the original [LLaMA-7B](https://github.com/facebookresearch/llama) weights into the Hugging Face format (LLaMA-7B-HF) following this [guide](https://huggingface.co/docs/transformers/main/model_doc/llama), or use [llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) directly
2. Merge the LLaMA-7B HF model with the ChatYuan-7B delta to produce ChatYuan-7B-merge
### Merge script
[apply_delta.py](https://huggingface.co/ClueAI/ChatYuan-7B/blob/main/apply_delta.py)
```shell
python3 apply_delta.py --base ~/model_weights/LLaMA-7B-HF --delta ~/model_weights/ChatYuan-7B --target ~/model_weights/ChatYuan-7B-merge
```
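Conceptually, applying the delta just adds each delta tensor to the corresponding base weight. A simplified sketch of that idea is shown below; the bundled `apply_delta.py` is the authoritative implementation and may differ in details (dtype handling, vocabulary resizing, etc.):
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

def apply_delta_sketch(base_path, delta_path, target_path):
    # Load the original LLaMA-7B HF weights and the ChatYuan-7B delta weights.
    base = LlamaForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
    delta = LlamaForCausalLM.from_pretrained(delta_path, torch_dtype=torch.float16)

    # Add each delta tensor to the matching base tensor (assumes identical names/shapes).
    base_state = base.state_dict()
    for name, tensor in delta.state_dict().items():
        base_state[name] += tensor

    base.save_pretrained(target_path)
    AutoTokenizer.from_pretrained(delta_path).save_pretrained(target_path)
```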
## Loading
```python
from transformers import LlamaForCausalLM, AutoTokenizer
import torch
import sys
ckpt = "~/model_weights/ChatYuan-7B-merge"
device = torch.device('cuda')
model = LlamaForCausalLM.from_pretrained(ckpt).to(device)  # move the model to the GPU so it matches the inputs below
tokenizer = AutoTokenizer.from_pretrained(ckpt)
```
## Inference
```python
def answer(prompt):
prompt = f"用户:{prompt}\n小元:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(input_ids, max_new_tokens=1024, do_sample = True, temperature = 0.7)
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
response = output[len(prompt):]
return response
```
## Examples
```
input: 学前教育专业岗位实习中,在学生方面会存在问题,请提出改进措施。800字
output: 作为学前教育专业的学生,在岗位实习中可能会遇到各种各样的问题,这些问题不仅影响了我们的学习和成长,也影响了我们的教育质量。为了帮助学生更好地完成实习工作,我们提出以下改进措施:
一、提高教育质量
在学前教育专业岗位实习中,教育质量是关键。学生应该掌握教育方法和技能,并且在实践中学习。为了提高教育质量,学校应该从以下几个方面着手:
1. 提供高质量的教育资源。学校应该提供各种教育资源,包括教材、工具和软件等,以帮助学生更好地完成实习工作。
2. 提高教师质量。教师应该具备丰富的教育经验和专业知识,以便更好地指导学生。
3. 提供培训和评估。学校应该为学生提供培训和评估,以帮助学生更好地掌握技能和知识。
二、加强沟通与合作
在岗位实习中,学生应该与老师和同学进行沟通和合作。老师应该与学生保持良好的沟通,以便更好地指导学生。同时,学生也应该与同学进行合作,以便更好地完成实习工作。
为了加强沟通与合作,学校应该从以下几个方面着手:
1. 提供交流渠道。学校应该为学生提供交流渠道,包括线上和线下的交流方式,以便学生与老师和同学之间进行良好的沟通。
2. 提供团队合作机会。学校应该为学生提供团队合作机会,以便学生能够更好地了解和协作。
3. 提供培训和指导。学校应该为学生提供培训和指导,以便学生能够更好地理解和应对问题。
三、保证实习安全
在岗位实习中,学生应该保证自身的安全。学校应该为学生提供安全保障,包括在实习过程中提供帮助和指导,以及在紧急情况下提供支持和帮助。
为了保证实习安全,学校应该从以下几个方面着手:
1. 提供安全教育。学校应该为学生提供安全教育,以便学生能够更好地了解安全要求和安全方法。
2. 提供安全指导。学校应该为学生提供安全指导,以便学生能够更好地掌握实习安全方法。
3. 提供安全保障。学校应该为学生提供安全保障,包括在实习过期中提供帮助和指导,以及在紧急情况下提予支持和帮助。
总之,学前教育专业岗位实习是学生成长和发展的重要阶段。通过提高教育质量、加强沟通与合作和保证实习安全等措施,学校可以帮助学生更好地完成实习工作,提高教育质量和专业技能。
```
```
input: 根据标题生成文章:标题:屈臣氏里的化妆品到底怎么样?正文:化妆品,要讲究科学运用,合理搭配。屈臣氏起码是正品连锁店。请继续后面的文字。
output: 作为一家有着数十年历史的正品连锁店,屈臣氏在市场上拥有一定的口碑和信誉。而对于喜欢化妆品的女性来说,屈臣氏里的化妆品也是不可忽视的选择。那么,屈臣氏里的化妆品到底怎么样呢?
首先,我们需要了解一下屈臣氏的采购政策。屈臣氏一直秉持着“质量第一、信誉商誉、顾客至上”的经营理念,严格按照国家的相关质量标准进行采购管理。在采购过程中,屈臣氏会对每一家供应商进行详细的评估和审核,以确保所采购的商品符合质量标准。
其次,我们需要了解一下屈臣氏销售的化妆品品牌。屈臣氏销售的化妆品品牌非常丰富,包括国内外著名品牌如兰蔻、雅诗兰黛、娇兰、资生堂等。屈臣氏还会不定期地邀请国内外知名化妆品专家来店内进行化妆品推介和使用指导,为顾客提供更专业的服务。
最后,我们需要了解一下屈臣氏销售的化妆品的价格。相比于一些高端专业的化妆品店,屈臣氏的化妆品价格相对较为亲民,但在保证品质的前提下,仍然可以满足大多数消费者的需求。
总之,屈臣氏里的化妆品在品质、销售渠道和价格等方面都具有一定的优势。但是,我们在选择化妆品时,还是需要注重科学的运用和合理的搭配,以达到更好的化妆效果。
```
```
input: 帮我写一个英文营销方案,针对iphone
output: Marketing Plan for iPhones
Introduction
iPhones have become the flagship devices of Apple and have gained immense popularity worldwide. With a sleek design, advanced features, and a smooth user interface, iPhones attract and retain customers through their lifetime. Therefore, developing a marketing plan for iPhones is essential to ensure the continued success and growth of the brand.
Objectives
The primary objective of this marketing plan is to increase the brand awareness and demand for iPhones by targeting potential customers and creating a strong relationship with them.
Target Market
The target audience for this marketing plan is people who are in the market for a new smartphone. The target audience is classified into three segments:
1. Upgraders: People who are currently using older versions of iPhones and are looking for a new device with advanced features.
2. Switchers: People who are currently using Android smartphones and are looking to switch to iPhones.
3. Newbies: People who have never used an iPhone before and are looking to experience its features for the first time.
Tactics
To achieve the objectives, the following tactics will be implemented:
1. Advertising
Apple's advertising strategy is well-known for its focus on creating an emotional connection with the audience through its advertisements. The marketing campaign will focus on highlighting the advanced features of iPhones and the benefits of using them. The advertisements will be placed on various platforms, including social media, television, and billboards.
2. Product Demonstrations
Product demonstrations will be held at Apple stores and other retail outlets to allow potential customers to experience the advanced features of iPhones. This will help people make informed decisions about their next smartphone.
3. Referral Program
A referral program will be launched to encourage existing customers to refer their friends to the brand. This will help to attract new customers and retain existing ones.
4. Content Marketing
Content marketing will be used to create and share valuable content that will be useful to potential customers. This will include tutorials, guides, and other content that will help people make informed decisions about their next smartphone.
Conclusion
In conclusion, this marketing plan will help Apple achieve its goals of increasing brand awareness and demand for its products. The tactics employed in this plan will help to attract new customers and retain existing ones by creating a strong relationship with the brand.
```
## Limitations
Some issues remain with the current base model and the model trained on this data:
1. When asked to follow instructions involving facts, the model may generate factual errors.
2. Because the model still has difficulty recognizing potentially harmful instructions, it occasionally generates harmful responses.
3. Reasoning and coding still need improvement.
Because these limitations remain, we ask developers to use the open-source code, data, models, and any other artifacts produced by this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
|
gokuls/hBERTv2_new_no_pretrain_stsb
|
gokuls
| 2023-06-15T03:17:28Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T12:09:44Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv2_new_no_pretrain_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.20926356415783265
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_stsb
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2174
- Pearson: 0.1946
- Spearmanr: 0.2093
- Combined Score: 0.2019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.3893 | 1.0 | 45 | 2.3698 | 0.1204 | 0.1138 | 0.1171 |
| 1.9589 | 2.0 | 90 | 2.2174 | 0.1946 | 0.2093 | 0.2019 |
| 1.6743 | 3.0 | 135 | 2.3481 | 0.2144 | 0.2207 | 0.2175 |
| 1.4068 | 4.0 | 180 | 2.5921 | 0.2472 | 0.2519 | 0.2496 |
| 1.2205 | 5.0 | 225 | 2.6279 | 0.2718 | 0.2701 | 0.2709 |
| 0.9353 | 6.0 | 270 | 2.5440 | 0.3117 | 0.3213 | 0.3165 |
| 0.7662 | 7.0 | 315 | 2.3053 | 0.3501 | 0.3519 | 0.3510 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_no_pretrain_sst2
|
gokuls
| 2023-06-15T03:02:04Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T11:26:08Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_no_pretrain_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.7924311926605505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_sst2
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4682
- Accuracy: 0.7924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.438 | 1.0 | 527 | 0.4682 | 0.7924 |
| 0.2599 | 2.0 | 1054 | 0.6088 | 0.7890 |
| 0.2173 | 3.0 | 1581 | 0.4971 | 0.7901 |
| 0.1972 | 4.0 | 2108 | 0.6153 | 0.7924 |
| 0.1838 | 5.0 | 2635 | 0.5523 | 0.7947 |
| 0.1812 | 6.0 | 3162 | 0.6560 | 0.8073 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Laurie/llama7b-lora-merged
|
Laurie
| 2023-06-15T03:01:13Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-14T03:10:27Z |
Merged LLaMA-7B with a LoRA adapter trained on the Alpaca dataset.
|
morokosi/q-Taxi-v3
|
morokosi
| 2023-06-15T03:00:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T03:00:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` here is the pickle-loading helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="morokosi/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
soddokayo/klue-roberta-large-klue-2016klp-crime1-ner
|
soddokayo
| 2023-06-15T02:39:40Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T02:28:27Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: klue-roberta-large-klue-2016klp-crime1-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-roberta-large-klue-2016klp-crime1-ner
This model is a fine-tuned version of [soddokayo/klue-roberta-large-klue-2016klp-ner](https://huggingface.co/soddokayo/klue-roberta-large-klue-2016klp-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2425
- Precision: 0.9392
- Recall: 0.9444
- F1: 0.9418
- Accuracy: 0.9517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 23 | 0.9424 | 0.6486 | 0.6667 | 0.6575 | 0.7183 |
| No log | 2.0 | 46 | 0.6026 | 0.7098 | 0.7611 | 0.7346 | 0.8410 |
| No log | 3.0 | 69 | 0.4010 | 0.8703 | 0.8944 | 0.8822 | 0.9235 |
| No log | 4.0 | 92 | 0.2797 | 0.9066 | 0.9167 | 0.9116 | 0.9416 |
| No log | 5.0 | 115 | 0.2845 | 0.9227 | 0.9278 | 0.9252 | 0.9457 |
| No log | 6.0 | 138 | 0.2425 | 0.9392 | 0.9444 | 0.9418 | 0.9517 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.11.0
|
gokuls/add_BERT_no_pretrain_mnli
|
gokuls
| 2023-06-15T02:32:17Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T09:50:23Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: add_BERT_no_pretrain_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3522172497965826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_no_pretrain_mnli
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0984
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1024 | 1.0 | 3068 | 1.0987 | 0.3182 |
| 1.0997 | 2.0 | 6136 | 1.0986 | 0.3274 |
| 1.0988 | 3.0 | 9204 | 1.0984 | 0.3545 |
| 1.0987 | 4.0 | 12272 | 1.0986 | 0.3545 |
| 1.0986 | 5.0 | 15340 | 1.0986 | 0.3545 |
| 1.0987 | 6.0 | 18408 | 1.0986 | 0.3274 |
| 1.0986 | 7.0 | 21476 | 1.0986 | 0.3274 |
| 1.0986 | 8.0 | 24544 | 1.0986 | 0.3274 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation
|
ZhangCheng
| 2023-06-15T02:03:11Z | 115 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"Question Generation",
"en",
"dataset:squad",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- squad
tags:
- Question Generation
widget:
- text: "<answer> T5v1.1 <context> Cheng fine-tuned T5v1.1 on SQuAD for question generation."
example_title: "Example 1"
- text: "<answer> SQuAD <context> Cheng fine-tuned T5v1.1 on SQuAD dataset for question generation."
example_title: "Example 2"
- text: "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio."
example_title: "Example 3"
---
# T5v1.1-Base Fine-Tuned on SQuAD for Question Generation
### Model in Action:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
trained_model_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation'
trained_tokenizer_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation'
class QuestionGeneration:
def __init__(self):
self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path)
self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path)
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.model = self.model.to(self.device)
self.model.eval()
def generate(self, answer:str, context:str):
input_text = '<answer> %s <context> %s ' % (answer, context)
encoding = self.tokenizer.encode_plus(
input_text,
return_tensors='pt'
)
input_ids = encoding['input_ids'].to(self.device)
attention_mask = encoding['attention_mask'].to(self.device)
outputs = self.model.generate(
input_ids = input_ids,
attention_mask = attention_mask
)
question = self.tokenizer.decode(
outputs[0],
skip_special_tokens = True,
clean_up_tokenization_spaces = True
)
return {'question': question, 'answer': answer}
if __name__ == "__main__":
context = 'ZhangCheng fine-tuned T5v1.1 on SQuAD dataset for question generation.'
answer = 'ZhangCheng'
QG = QuestionGeneration()
qa = QG.generate(answer, context)
print(qa['question'])
# Output:
# Who fine-tuned T5v1.1 on SQuAD?
```
|
ZhangCheng/T5-Base-finetuned-for-Question-Generation
|
ZhangCheng
| 2023-06-15T02:00:56Z | 176 | 14 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"Question Generation",
"en",
"dataset:squad",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- squad
tags:
- Question Generation
widget:
- text: "<answer> T5 <context> Cheng fine-tuned T5 on SQuAD for question generation."
example_title: "Example 1"
- text: "<answer> SQuAD <context> Cheng fine-tuned T5 on SQuAD dataset for question generation."
example_title: "Example 2"
- text: "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio."
example_title: "Example 3"
---
# T5-Base Fine-Tuned on SQuAD for Question Generation
### Model in Action:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
trained_model_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation'
trained_tokenizer_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation'
class QuestionGeneration:
def __init__(self, model_dir=None):
self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path)
self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path)
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.model = self.model.to(self.device)
self.model.eval()
def generate(self, answer: str, context: str):
input_text = '<answer> %s <context> %s ' % (answer, context)
encoding = self.tokenizer.encode_plus(
input_text,
return_tensors='pt'
)
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']
outputs = self.model.generate(
input_ids=input_ids,
attention_mask=attention_mask
)
question = self.tokenizer.decode(
outputs[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True
)
return {'question': question, 'answer': answer, 'context': context}
if __name__ == "__main__":
context = 'ZhangCheng fine-tuned T5 on SQuAD dataset for question generation.'
answer = 'ZhangCheng'
QG = QuestionGeneration()
qa = QG.generate(answer, context)
print(qa['question'])
# Output:
# Who fine-tuned T5 on SQuAD dataset for question generation?
```
|
arminmrm93/qtable_taxi_v3
|
arminmrm93
| 2023-06-15T01:46:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T01:46:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qtable_taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook.
model = load_from_hub(repo_id="arminmrm93/qtable_taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gokuls/hBERTv1_new_pretrain_48_KD_wnli
|
gokuls
| 2023-06-15T01:40:47Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T01:36:10Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_KD_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6854
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7548 | 1.0 | 5 | 0.7991 | 0.4366 |
| 0.7094 | 2.0 | 10 | 0.7048 | 0.4366 |
| 0.7546 | 3.0 | 15 | 0.6934 | 0.4789 |
| 0.7016 | 4.0 | 20 | 0.6999 | 0.4366 |
| 0.705 | 5.0 | 25 | 0.6859 | 0.5634 |
| 0.701 | 6.0 | 30 | 0.7016 | 0.4366 |
| 0.6975 | 7.0 | 35 | 0.6854 | 0.5634 |
| 0.6952 | 8.0 | 40 | 0.6945 | 0.4507 |
| 0.7021 | 9.0 | 45 | 0.7181 | 0.4366 |
| 0.7039 | 10.0 | 50 | 0.6879 | 0.5634 |
| 0.698 | 11.0 | 55 | 0.6941 | 0.4507 |
| 0.6953 | 12.0 | 60 | 0.6997 | 0.4225 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
arminmrm93/q-FrozenLake-v1-4x4-noSlippery
|
arminmrm93
| 2023-06-15T01:38:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T01:38:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym
# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook.
model = load_from_hub(repo_id="arminmrm93/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
joohwan/a9a7a9a0
|
joohwan
| 2023-06-15T01:33:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-15T01:12:01Z |
---
tags:
- generated_from_trainer
model-index:
- name: a9a7a9a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# a9a7a9a0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
coreml-community/coreml-lyriel_v16
|
coreml-community
| 2023-06-15T01:28:41Z | 0 | 6 | null |
[
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-23T01:02:09Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
inference: true
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).<br>
# lyriel_v16
Source(s): [CivitAI](https://civitai.com/models/22922)<br>
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c028d3fe-669c-44c0-9eac-b9d67f729492/width=450/00014-1261263585.jpeg" alt="00014-1261263585.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8aa5c19b-57a2-4a02-9a62-61320f459ef5/width=450/00008-3464469766.jpeg" alt="00008-3464469766.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/95d66227-e6ac-4140-b187-355e111ee197/width=450/00018-2085175018.jpeg" alt="00018-2085175018.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/21ac6b1e-ac11-4012-80d1-b0ea14951c5f/width=450/00029-1583775480.jpeg" alt="00029-1583775480.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8ce261c6-759d-46fc-b8c1-f4a46cb449d4/width=450/00023-376328055.jpeg" alt="00023-376328055.png" style="max-height: 100%; max-width: 100%;">
Hello, the model was created as an artistic style. The model can do almost anything; the main thing is to follow the prompt. Hands and eyes look good in most cases.
Model Information:
This model is generally designed for portraits and full-length anime-style photos. Fantastic landscapes come out quite decent, and it doesn't require kilometer-long prompts to get a high-quality result.
Recommended settings: Sampler: DPM++ 2M Karras, Clip skip: 2, Steps: 25-35+
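For reference, these settings map onto diffusers roughly like this (a minimal sketch, assuming you run the original checkpoint rather than the Core ML conversion; the repo path and prompt are placeholders):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder path: point this at the original lyriel_v16 checkpoint, not the Core ML files.
pipe = StableDiffusionPipeline.from_pretrained("path/to/lyriel_v16", torch_dtype=torch.float16).to("cuda")

# DPM++ 2M Karras = multistep DPM-Solver with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe(
    "portrait of an elven knight in ornate armor, misty forest",  # example prompt
    num_inference_steps=30,  # recommended 25-35+
    clip_skip=2,             # clip_skip is a pipeline argument in recent diffusers releases
).images[0]
image.save("lyriel_sample.png")
```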
This model would not have come out without the help of XpucT, who made Deliberate.
If you have the desire and means to support future models, here you go:
Advanced Cash - U 1281 8592 6885 , E 8642 3924 9315 , R 1339 7462 2915
PEYEER - P1075963156
I hope you like it, thanks for the feedback
|
gokuls/hBERTv2_new_no_pretrain_rte
|
gokuls
| 2023-06-15T01:26:33Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T11:20:34Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_no_pretrain_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5306859205776173
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_rte
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6981
- Accuracy: 0.5307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7697 | 1.0 | 20 | 0.7526 | 0.5271 |
| 0.7285 | 2.0 | 40 | 0.7208 | 0.5271 |
| 0.7201 | 3.0 | 60 | 0.7112 | 0.5343 |
| 0.7043 | 4.0 | 80 | 0.6981 | 0.5307 |
| 0.6569 | 5.0 | 100 | 0.7251 | 0.5235 |
| 0.5762 | 6.0 | 120 | 0.8571 | 0.4765 |
| 0.4336 | 7.0 | 140 | 0.9540 | 0.4765 |
| 0.3299 | 8.0 | 160 | 1.2464 | 0.4838 |
| 0.2561 | 9.0 | 180 | 1.4299 | 0.5018 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_no_pretrain_qqp
|
gokuls
| 2023-06-15T01:16:07Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T06:09:57Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_no_pretrain_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.7856047489488004
- name: F1
type: f1
value: 0.6930594900849859
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_qqp
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4537
- Accuracy: 0.7856
- F1: 0.6931
- Combined Score: 0.7393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5037 | 1.0 | 2843 | 0.4537 | 0.7856 | 0.6931 | 0.7393 |
| 0.4066 | 2.0 | 5686 | 0.4549 | 0.7946 | 0.6758 | 0.7352 |
| 0.3367 | 3.0 | 8529 | 0.4630 | 0.7950 | 0.6650 | 0.7300 |
| 0.2876 | 4.0 | 11372 | 0.5279 | 0.8180 | 0.7598 | 0.7889 |
| 0.2498 | 5.0 | 14215 | 0.4857 | 0.8217 | 0.7650 | 0.7933 |
| 0.2371 | 6.0 | 17058 | 0.5113 | 0.8216 | 0.7376 | 0.7796 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mannamvarun/image_denoising
|
mannamvarun
| 2023-06-15T00:55:43Z | 0 | 1 | null |
[
"image-to-image",
"en",
"region:us"
] |
image-to-image
| 2023-06-15T00:45:52Z |
---
language:
- en
pipeline_tag: image-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Varun Mannam
- **Shared by [optional]:** Varun Mannam
- **Model type:** Image-to-image regression
- **Language(s) (NLP):** [More Information Needed]
- **License:** GPL
- **Finetuned from model [optional]:** No need
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ND-HowardGroup/Instant-Image-Denoising/tree/master/Plugins/Image_Denoising_Plugins_Journal
- **Paper [optional]:** https://opg.optica.org/optica/fulltext.cfm?uri=optica-9-4-335&id=470659
- **Demo [optional]:** https://github.com/ND-HowardGroup/Instant-Image-Denoising/tree/master/Plugins/Model_validation
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-15T00:03:58Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-14T21:51:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0219
- Accuracy: 0.9978
- F1: 0.9769
- Precision: 0.9979
- Recall: 0.9568
- Roc Auc Score: 0.9783
- Tpr At Fpr 0.01: 0.9552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0063 | 1.0 | 21554 | 0.0127 | 0.9974 | 0.9725 | 0.9933 | 0.9526 | 0.9761 | 0.914 |
| 0.0038 | 2.0 | 43108 | 0.0136 | 0.9977 | 0.9751 | 0.9940 | 0.957 | 0.9784 | 0.8992 |
| 0.0015 | 3.0 | 64662 | 0.0147 | 0.9977 | 0.9755 | 0.9958 | 0.956 | 0.9779 | 0.9498 |
| 0.001 | 4.0 | 86216 | 0.0123 | 0.9981 | 0.9798 | 0.9979 | 0.9624 | 0.9812 | 0.9622 |
| 0.0 | 5.0 | 107770 | 0.0219 | 0.9978 | 0.9769 | 0.9979 | 0.9568 | 0.9783 | 0.9552 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
escaa/my_awesome_model
|
escaa
| 2023-06-14T23:59:46Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-13T14:19:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: escaa/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# escaa/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0804
- Validation Loss: 0.1653
- Train Accuracy: 0.9463
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2161 | 0.1680 | 0.9432 | 0 |
| 0.1254 | 0.1625 | 0.9455 | 1 |
| 0.0804 | 0.1653 | 0.9463 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
w4yw4rd/q-FrozenLake-v1-4x4-noSlippery
|
w4yw4rd
| 2023-06-14T23:45:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-14T23:45:21Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym
# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook.
model = load_from_hub(repo_id="w4yw4rd/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sdan/simple-embeddings
|
sdan
| 2023-06-14T23:42:54Z | 5 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"region:us"
] |
feature-extraction
| 2023-06-14T23:40:03Z |
---
library_name: "transformers.js"
---
https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
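As a rough sketch of that conversion path with 🤗 Optimum (assuming a recent Optimum release; older versions use `from_transformers=True` instead of `export=True`):
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

model_id = "sentence-transformers/all-MiniLM-L6-v2"

# Export the PyTorch checkpoint to ONNX on the fly.
ort_model = ORTModelForFeatureExtraction.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save everything locally; for a Transformers.js-ready repo, place the .onnx
# weights in a subfolder named `onnx` as described above.
ort_model.save_pretrained("all-MiniLM-L6-v2-onnx")
tokenizer.save_pretrained("all-MiniLM-L6-v2-onnx")
```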
|
peteozegov/q-Taxi-v3
|
peteozegov
| 2023-06-14T23:36:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-30T22:06:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 8.40 +/- 2.54
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook.
model = load_from_hub(repo_id="peteozegov/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vdo/Video-LLaMA-Series
|
vdo
| 2023-06-14T23:18:34Z | 0 | 10 | null |
[
"visual-question-answering",
"en",
"zh",
"arxiv:2306.02858",
"license:bsd-3-clause",
"region:us"
] |
visual-question-answering
| 2023-06-14T21:17:36Z |
---
license: bsd-3-clause
language:
- en
- zh
pipeline_tag: visual-question-answering
---
# Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
This is the Hugging Face repo for storing pre-trained & fine-tuned checkpoints of our [Video-LLaMA](https://arxiv.org/abs/2306.02858), which is a multi-modal conversational large language model with video understanding capability.
## Vision-Language Branch
| Checkpoint | Link | Note |
|:------------|-------------|-------------|
| pretrain-vicuna7b | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/pretrain_vicuna7b-v2.pth) | Pre-trained on WebVid (2.5M video-caption pairs) and LLaVA-CC3M (595k image-caption pairs) |
| finetune-vicuna7b-v2 | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/finetune-vicuna7b-v2.pth) | Fine-tuned on the instruction-tuning data from [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4), [LLaVA](https://github.com/haotian-liu/LLaVA) and [VideoChat](https://github.com/OpenGVLab/Ask-Anything)|
| pretrain-vicuna13b | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/pretrain-vicuna13b.pth) | Pre-trained on WebVid (2.5M video-caption pairs) and LLaVA-CC3M (595k image-caption pairs) |
| finetune-vicuna13b-v2 | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/finetune-vicuna13b-v2.pth) | Fine-tuned on the instruction-tuning data from [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4), [LLaVA](https://github.com/haotian-liu/LLaVA) and [VideoChat](https://github.com/OpenGVLab/Ask-Anything)|
| pretrain-ziya13b-zh | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/pretrain-ziya13b-zh.pth) | Pre-trained with Chinese LLM [Ziya-13B](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) |
| finetune-ziya13b-zh | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/finetune-ziya13b-zh.pth) | Fine-tuned on machine-translated [VideoChat](https://github.com/OpenGVLab/Ask-Anything) instruction-following dataset (in Chinese)|
| pretrain-billa7b-zh | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/pretrain-billa7b-zh.pth) | Pre-trained with Chinese LLM [BiLLA-7B](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) |
| finetune-billa7b-zh | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/finetune-billa7b-zh.pth) | Fine-tuned on machine-translated [VideoChat](https://github.com/OpenGVLab/Ask-Anything) instruction-following dataset (in Chinese) |
## Audio-Language Branch
| Checkpoint | Link | Note |
|:------------|-------------|-------------|
| pretrain-vicuna7b | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/pretrain_vicuna7b_audiobranch.pth) | Pre-trained on WebVid (2.5M video-caption pairs) and LLaVA-CC3M (595k image-caption pairs) |
| finetune-vicuna7b-v2 | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series/resolve/main/finetune_vicuna7b_audiobranch.pth) | Fine-tuned on the instruction-tuning data from [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4), [LLaVA](https://github.com/haotian-liu/LLaVA) and [VideoChat](https://github.com/OpenGVLab/Ask-Anything)|
## Usage
For launching the pre-trained Video-LLaMA on your own machine, please refer to our [github repo](https://github.com/DAMO-NLP-SG/Video-LLaMA).
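To fetch a single checkpoint from the tables above programmatically, a `huggingface_hub` call is enough (a sketch; pick the filename that matches the branch and LLM you need — the links above point at the original DAMO-NLP-SG repo):
```python
from huggingface_hub import hf_hub_download

# Vision-language branch, Vicuna-7B, fine-tuned weights (see the first table above).
ckpt_path = hf_hub_download(
    repo_id="DAMO-NLP-SG/Video-LLaMA-Series",
    filename="finetune-vicuna7b-v2.pth",
)
print(ckpt_path)  # local path to reference from the Video-LLaMA config files
```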
|
hangeol/7
|
hangeol
| 2023-06-14T23:18:10Z | 31 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T22:28:34Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/7
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
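A minimal loading sketch with diffusers (the placeholder token is an assumption — check the embedding name shipped in this repo):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo; "<concept>" is an assumed token name.
pipe.load_textual_inversion("hangeol/7", token="<concept>")

image = pipe("a photo of <concept> on a wooden table", num_inference_steps=30).images[0]
image.save("textual_inversion_sample.png")
```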
|
patrickvonplaten/wav2vec2-common_voice-tr-mms-demo
|
patrickvonplaten
| 2023-06-14T23:14:22Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"mms",
"tr",
"dataset:common_voice",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-14T20:52:57Z |
---
language:
- tr
license: cc-by-nc-4.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- mms
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-common_voice-tr-mms-demo-3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: COMMON_VOICE - TR
type: common_voice
config: tr
split: test
args: 'Config: tr, Training split: train+validation, Eval split: test'
metrics:
- name: Wer
type: wer
value: 0.2267388417934838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-mms-demo
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1532
- Wer: 0.2267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 0.1822 | 0.2605 |
| No log | 1.83 | 200 | 0.1620 | 0.2389 |
| No log | 2.75 | 300 | 0.1581 | 0.2318 |
| No log | 3.67 | 400 | 0.1535 | 0.2270 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Stevenhpy/unsup-simcse-bert-base-uncased-focal
|
Stevenhpy
| 2023-06-14T23:11:49Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-14T22:55:56Z |
---
pipeline_tag: sentence-similarity
---
## How to Get Started with the Model
please refer to https://github.com/puerrrr/Focal-InfoNCE
## Model Card Contact
pengyue@ualberta.ca
|
Stevenhpy/unsup-simcse-roberta-base-focal
|
Stevenhpy
| 2023-06-14T23:11:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-14T22:59:06Z |
---
pipeline_tag: sentence-similarity
---
## How to Get Started with the Model
please refer to https://github.com/puerrrr/Focal-InfoNCE
## Model Card Contact
pengyue@ualberta.ca
|
platzi/vit_model-juan_jose_cano
|
platzi
| 2023-06-14T22:42:21Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-07T17:35:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/platzi/platzi-vit-model-luis-reyes/resolve/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/platzi/platzi-vit-model-luis-reyes/resolve/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: vit_model-juan_jose_cano
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model-juan_jose_cano
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0810
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0568 | 3.85 | 500 | 0.0810 | 0.9774 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
grantprice/pythia-410m-deduped-finetuned-DND
|
grantprice
| 2023-06-14T22:20:49Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-09T23:55:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: pythia-410m-deduped-finetuned-DND
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-410m-deduped-finetuned-DND
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 480 | 4.1250 |
| 2.4446 | 2.0 | 960 | 4.3426 |
| 1.5974 | 3.0 | 1440 | 4.7177 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_emb_com_qnli
|
gokuls
| 2023-06-14T21:37:06Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T16:22:16Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_emb_com_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5053999633900788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_emb_com_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7033 | 1.0 | 819 | 0.7265 | 0.5054 |
| 0.6988 | 2.0 | 1638 | 0.6937 | 0.4946 |
| 0.6969 | 3.0 | 2457 | 0.6934 | 0.4946 |
| 0.6951 | 4.0 | 3276 | 0.6945 | 0.4946 |
| 0.6944 | 5.0 | 4095 | 0.6963 | 0.5054 |
| 0.6944 | 6.0 | 4914 | 0.6946 | 0.5054 |
| 0.6945 | 7.0 | 5733 | 0.6959 | 0.4946 |
| 0.6943 | 8.0 | 6552 | 0.6931 | 0.5054 |
| 0.6938 | 9.0 | 7371 | 0.6937 | 0.5054 |
| 0.6939 | 10.0 | 8190 | 0.6944 | 0.4946 |
| 0.6936 | 11.0 | 9009 | 0.6940 | 0.4946 |
| 0.6937 | 12.0 | 9828 | 0.6933 | 0.4946 |
| 0.6937 | 13.0 | 10647 | 0.6987 | 0.4946 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
2weaks/bottle2
|
2weaks
| 2023-06-14T21:18:17Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T21:06:06Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bottle2 Dreambooth model trained by 2weaks with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
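Since the repo is stored as a diffusers `StableDiffusionPipeline`, a quick local test could also look like this (a sketch; the instance token used during DreamBooth training is not documented here, so the prompt is a guess):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("2weaks/bottle2", torch_dtype=torch.float16).to("cuda")

# "bottle2" is assumed to be the instance token learned during training.
image = pipe("a photo of bottle2 on a wooden table", num_inference_steps=30).images[0]
image.save("bottle2_sample.png")
```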
Sample pictures of this concept:
|
lucaordronneau/distilbert-base-uncased-finetuned-OCEAN-sentiment
|
lucaordronneau
| 2023-06-14T20:59:57Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T15:15:56Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
---
|
monke1/bunnio
|
monke1
| 2023-06-14T20:52:41Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T20:48:04Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bunnio Dreambooth model trained by monke1 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Zaid/distilbert-base-uncased-finetuned-cola
|
Zaid
| 2023-06-14T20:47:47Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T20:41:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5471613867597194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7833
- Matthews Correlation: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5216 | 1.0 | 535 | 0.5186 | 0.4153 |
| 0.3502 | 2.0 | 1070 | 0.5054 | 0.5065 |
| 0.2402 | 3.0 | 1605 | 0.5903 | 0.5275 |
| 0.1843 | 4.0 | 2140 | 0.7420 | 0.5370 |
| 0.1347 | 5.0 | 2675 | 0.7833 | 0.5472 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Alfantin/alfantin-750-325
|
Alfantin
| 2023-06-14T20:40:23Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T20:28:06Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### alfantin-750-325 Dreambooth model trained by Alfantin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
NBRZ/bert-trainer-8b
|
NBRZ
| 2023-06-14T20:38:54Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:generator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T05:44:03Z |
---
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-trainer-8b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-trainer-8b
This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.5416 | 1.0 | 500 | 6.5207 |
| 6.393 | 1.99 | 1000 | 6.3903 |
| 6.2817 | 2.99 | 1500 | 6.3033 |
| 6.2274 | 3.98 | 2000 | 6.2671 |
| 6.179 | 4.98 | 2500 | 6.2431 |
| 6.1684 | 5.98 | 3000 | 6.2309 |
| 6.1244 | 6.97 | 3500 | 6.2114 |
| 6.0879 | 7.97 | 4000 | 6.1932 |
| 6.0643 | 8.96 | 4500 | 6.1791 |
| 6.0481 | 9.96 | 5000 | 6.1638 |
| 6.0231 | 10.96 | 5500 | 6.1581 |
| 5.9987 | 11.95 | 6000 | 6.1365 |
| 5.9989 | 12.95 | 6500 | 6.1194 |
| 5.9535 | 13.94 | 7000 | 6.1095 |
| 5.9139 | 14.94 | 7500 | 6.0890 |
| 5.8462 | 15.94 | 8000 | 6.0224 |
| 5.7689 | 16.93 | 8500 | 5.9266 |
| 5.6137 | 17.93 | 9000 | 5.7195 |
| 4.7163 | 18.92 | 9500 | 4.6131 |
| 4.0877 | 19.92 | 10000 | 4.0903 |
| 3.7832 | 20.92 | 10500 | 3.8340 |
| 3.6104 | 21.91 | 11000 | 3.6572 |
| 3.4615 | 22.91 | 11500 | 3.5278 |
| 3.3661 | 23.9 | 12000 | 3.4201 |
| 3.271 | 24.9 | 12500 | 3.3333 |
| 3.2179 | 25.9 | 13000 | 3.2720 |
| 3.1759 | 26.89 | 13500 | 3.2317 |
| 3.1419 | 27.89 | 14000 | 3.2006 |
| 3.1041 | 28.88 | 14500 | 3.1806 |
| 3.0836 | 29.88 | 15000 | 3.1693 |
| 3.0998 | 30.88 | 15500 | 3.1679 |
| 3.08 | 31.87 | 16000 | 3.1639 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LarryAIDraw/KMS_August_von_Parseval_loar
|
LarryAIDraw
| 2023-06-14T20:30:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-14T20:27:37Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/89994/kms-august-von-parsevalazur-lane
|
waspa/en_pipeline
|
waspa
| 2023-06-14T20:29:07Z | 4 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2023-06-14T20:24:35Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 1.0
- name: NER Recall
type: recall
value: 1.0
- name: NER F Score
type: f_score
value: 1.0
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.2,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `RESTAURANT` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 100.00 |
| `ENTS_P` | 100.00 |
| `ENTS_R` | 100.00 |
| `TOK2VEC_LOSS` | 0.00 |
| `NER_LOSS` | 0.00 |
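A minimal usage sketch (assuming the pipeline was packaged with `spacy huggingface-hub push`, so the wheel name below is a guess — adjust it to the actual file in this repo):
```python
# pip install https://huggingface.co/waspa/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("We had dinner at the Blue Lagoon restaurant last night.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # entities tagged with the RESTAURANT label
```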
|
robinhad/open_llama_7b_uk
|
robinhad
| 2023-06-14T20:10:18Z | 0 | 0 | null |
[
"llama",
"text-generation",
"uk",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-06-14T19:57:06Z |
---
license: apache-2.0
language:
- uk
pipeline_tag: text-generation
tags:
- llama
---
This is a release of Open LLaMA, tuned for the Ukrainian language.
Currently it contains adapter weights only; this may change in the future.
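A loading sketch with PEFT (both the base checkpoint id and the adapter format are assumptions here — check the files in this repo before relying on them):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openlm-research/open_llama_7b"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the Ukrainian adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "robinhad/open_llama_7b_uk")

inputs = tokenizer("Розкажи про Київ.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```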
|
deepgoyal19/disney_lora
|
deepgoyal19
| 2023-06-14T20:01:54Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-14T20:01:46Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - deepgoyal19/disney_lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.
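A minimal inference sketch (assuming a diffusers version with `load_lora_weights`; the prompt is only an example):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA adaption weights from this repo on top of the base pipeline.
pipe.load_lora_weights("deepgoyal19/disney_lora")

image = pipe("a castle on a hill, cartoon style", num_inference_steps=30).images[0]
image.save("disney_lora_sample.png")
```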
|
mudra1710/clip-embeddings
|
mudra1710
| 2023-06-14T19:44:06Z | 0 | 0 | null |
[
"zero-shot-image-classification",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2023-06-12T17:45:53Z |
---
pipeline_tag: zero-shot-image-classification
---
|
gokuls/hBERTv1_new_pretrain_48_KD_qnli
|
gokuls
| 2023-06-14T19:40:51Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T17:26:06Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_KD_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6009518579535054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6648
- Accuracy: 0.6010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6818 | 1.0 | 819 | 0.6669 | 0.5966 |
| 0.6689 | 2.0 | 1638 | 0.6732 | 0.5858 |
| 0.6675 | 3.0 | 2457 | 0.6721 | 0.5810 |
| 0.663 | 4.0 | 3276 | 0.6793 | 0.5832 |
| 0.66 | 5.0 | 4095 | 0.6663 | 0.5999 |
| 0.6574 | 6.0 | 4914 | 0.6648 | 0.6010 |
| 0.6591 | 7.0 | 5733 | 0.6781 | 0.5731 |
| 0.659 | 8.0 | 6552 | 0.6685 | 0.5951 |
| 0.6697 | 9.0 | 7371 | 0.6793 | 0.5792 |
| 0.6755 | 10.0 | 8190 | 0.6829 | 0.5698 |
| 0.6794 | 11.0 | 9009 | 0.6780 | 0.5773 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thackerhelik/ppo-CleanRL-LunarLander-v2
|
thackerhelik
| 2023-06-14T19:40:04Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-14T17:18:07Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 117.28 +/- 46.77
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'gym_id': 'LunarLander-v2',
 'learning_rate': 0.00025,
 'seed': 1,
 'total_timesteps': 5000000,
 'torch_deterministic': True,
 'cuda': True,
 'capture_video': False,
 'num_envs': 4,
 'num_steps': 1024,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.999,
 'gae_lambda': 0.98,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'thackerhelik/ppo-CleanRL-LunarLander-v2',
 'batch_size': 4096,
 'minibatch_size': 1024}
```
|
gokuls/hBERTv1_new_pretrain_48_emb_com_qnli
|
gokuls
| 2023-06-14T19:36:42Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T16:17:21Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5983891634632985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6611
- Accuracy: 0.5984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6862 | 1.0 | 819 | 0.6995 | 0.5513 |
| 0.6677 | 2.0 | 1638 | 0.6624 | 0.5949 |
| 0.6553 | 3.0 | 2457 | 0.6611 | 0.5984 |
| 0.6426 | 4.0 | 3276 | 0.6866 | 0.5762 |
| 0.6244 | 5.0 | 4095 | 0.6751 | 0.5951 |
| 0.6136 | 6.0 | 4914 | 0.6903 | 0.5728 |
| 0.6036 | 7.0 | 5733 | 0.7188 | 0.5731 |
| 0.5855 | 8.0 | 6552 | 0.7175 | 0.5825 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dappradar/setfit-collections-multilabel
|
dappradar
| 2023-06-14T19:31:53Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T22:16:10Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dappradar/setfit-collections-multilabel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dappradar/setfit-collections-multilabel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
abulatk1n/distilbert-base-uncased-tweets-disaster
|
abulatk1n
| 2023-06-14T19:25:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T21:52:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-uncased-tweets-disaster
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-tweets-disaster
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6060
- Accuracy: 0.8189
- F1: 0.8180
- Precision: 0.7945
- Recall: 0.7484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
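The accuracy, F1, precision, and recall values reported for this model are the kind of numbers a `compute_metrics` callback produces; the sketch below is an assumption (including the averaging choices), not the original evaluation code:
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    # Sketch of a Trainer metrics callback for binary disaster-tweet labels.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # assumption about the averaging
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
    }
```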
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1515 | 1.0 | 113 | 0.5913 | 0.8215 | 0.8197 | 0.8129 | 0.7290 |
| 0.1317 | 2.0 | 226 | 0.6060 | 0.8189 | 0.8180 | 0.7945 | 0.7484 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
leonelhs/faceparser
|
leonelhs
| 2023-06-14T19:10:37Z | 18 | 0 |
transformers
|
[
"transformers",
"vision",
"image-segmentation",
"makeup",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-06-07T21:57:30Z |
---
license: mit
tags:
- vision
- image-segmentation
- makeup
---
### This is an unofficial Face Parser model mirror.
This copy exists because AI models can be very fragmented, and it is hard to keep pointing to an official source. Until this concern is resolved, the projects listed here will use this repo.
### [Face Shine](https://github.com/leonelhs/face-shine)
Face Shine Is a backend server for photo enhancement and restoration.
### [Super Face](https://github.com/leonelhs/SuperFace/)
Super Face is a Python QT frontend for Face Shine server.
<img src="https://drive.google.com/uc?export=view&id=1D7hpjQSlUkzfTba-E5Ul4Rb1c8lYkFj5"/>
<img src="https://drive.google.com/uc?export=view&id=1oKpJe-Ff3SeEekhGVRP1Ap3eIFqt0c8u"/>
|
cambridgeltl/SapBERT-from-PubMedBERT-fulltext
|
cambridgeltl
| 2023-06-14T19:03:02Z | 1,698,560 | 43 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"biomedical",
"lexical semantics",
"bionlp",
"biology",
"science",
"embedding",
"entity linking",
"en",
"arxiv:2010.11784",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- en
tags:
- biomedical
- lexical semantics
- bionlp
- biology
- science
- embedding
- entity linking
datasets:
- UMLS
---
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-PubMedBERT
SapBERT by [Liu et al. (2020)](https://arxiv.org/pdf/2010.11784.pdf). Trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AA (English only), using [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) as the base model.
### Expected input and output
The input should be a string containing a biomedical entity name, e.g., "covid infection" or "Hydroxychloroquine". The [CLS] embedding of the last layer is regarded as the output.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext").cuda()
# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]
bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
padding="max_length",
max_length=25,
truncation=True,
return_tensors="pt")
toks_cuda = {}
for k,v in toks.items():
toks_cuda[k] = v.cuda()
cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding
all_embs.append(cls_rep.cpu().detach().numpy())
all_embs = np.concatenate(all_embs, axis=0)
```
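For entity linking, these embeddings are typically compared by nearest-neighbour search; the following is a hedged sketch (assuming scikit-learn is available) that reuses `all_names` and `all_embs` from the script above:
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rank all names by cosine similarity to the first one ("covid-19").
query_emb = all_embs[0:1]
scores = cosine_similarity(query_emb, all_embs)[0]
top_k = np.argsort(-scores)[1:4]  # skip index 0, the query itself
print([(all_names[j], float(scores[j])) for j in top_k])
```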
For more details about training and eval, see SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
### Citation
```bibtex
@inproceedings{liu-etal-2021-self,
title = "Self-Alignment Pretraining for Biomedical Entity Representations",
author = "Liu, Fangyu and
Shareghi, Ehsan and
Meng, Zaiqiao and
Basaldella, Marco and
Collier, Nigel",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
pages = "4228--4238",
abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```
|