| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 18:29:29) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 555 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 18:25:24) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|

Each record below is one table row; because the `card` value is multi-line markdown (YAML frontmatter plus body), it is reproduced in full beneath its metadata row.
| DOOGLAK/Tagged_Uni_500v4_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T20:12:29Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni500v4_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T20:07:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni500v4_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_500v4_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni500v4_wikigold_split
type: tagged_uni500v4_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.6813225466056982
- name: Recall
type: recall
value: 0.6430942895086321
- name: F1
type: f1
value: 0.6616567036720751
- name: Accuracy
type: accuracy
value: 0.9231136153593894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_500v4_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni500v4_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2629
- Precision: 0.6813
- Recall: 0.6431
- F1: 0.6617
- Accuracy: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 182 | 0.2853 | 0.5326 | 0.4525 | 0.4893 | 0.8999 |
| No log | 2.0 | 364 | 0.2683 | 0.6492 | 0.5930 | 0.6198 | 0.9143 |
| 0.1134 | 3.0 | 546 | 0.2629 | 0.6813 | 0.6431 | 0.6617 | 0.9231 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
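The usage sections of this card are unfilled, so here is a minimal inference sketch using the generic `transformers` token-classification pipeline with this card's model id; the example sentence and the `aggregation_strategy` setting are illustrative choices, not part of the original card.
```python
from transformers import pipeline

# Minimal sketch: the checkpoint on the Hub is assumed to bundle its
# tokenizer and label map, as Trainer-generated repos normally do.
ner = pipeline(
    "token-classification",
    model="DOOGLAK/Tagged_Uni_500v4_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Barack Obama visited Paris with Angela Merkel."))
# Each result carries entity_group, score, word, and character offsets.
```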
| DOOGLAK/Tagged_Uni_500v3_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T20:07:09Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni500v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T20:02:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni500v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_500v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni500v3_wikigold_split
type: tagged_uni500v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7143812709030101
- name: Recall
type: recall
value: 0.7115256495669554
- name: F1
type: f1
value: 0.7129506008010682
- name: Accuracy
type: accuracy
value: 0.9340035371870055
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_500v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni500v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2350
- Precision: 0.7144
- Recall: 0.7115
- F1: 0.7130
- Accuracy: 0.9340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 172 | 0.2361 | 0.6056 | 0.5596 | 0.5817 | 0.9194 |
| No log | 2.0 | 344 | 0.2236 | 0.6872 | 0.6922 | 0.6897 | 0.9315 |
| 0.1011 | 3.0 | 516 | 0.2350 | 0.7144 | 0.7115 | 0.7130 | 0.9340 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| 0x-YuAN/CL_1 | 0x-YuAN | 2022-08-11T19:56:16Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "zh", "dataset:yuan1729/autotrain-data-YuAN-lawthone-CL_facts_backTrans", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-11T18:47:58Z |
---
tags:
- autotrain
- text-classification
language:
- zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- yuan1729/autotrain-data-YuAN-lawthone-CL_facts_backTrans
co2_eq_emissions:
emissions: 151.97297148175758
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1241547318
- CO2 Emissions (in grams): 151.9730
## Validation Metrics
- Loss: 0.512
- Accuracy: 0.862
- Macro F1: 0.862
- Micro F1: 0.862
- Weighted F1: 0.862
- Macro Precision: 0.863
- Micro Precision: 0.862
- Weighted Precision: 0.863
- Macro Recall: 0.862
- Micro Recall: 0.862
- Weighted Recall: 0.862
## Usage
You can use cURL to access this model:
```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/models/yuan1729/autotrain-YuAN-lawthone-CL_facts_backTrans-1241547318
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# use_auth_token=True assumes a private repo and a logged-in Hub session.
model = AutoModelForSequenceClassification.from_pretrained("yuan1729/autotrain-YuAN-lawthone-CL_facts_backTrans-1241547318", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("yuan1729/autotrain-YuAN-lawthone-CL_facts_backTrans-1241547318", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
# outputs.logits has one score per class; argmax gives the predicted label id.
predicted = model.config.id2label[outputs.logits.argmax(dim=-1).item()]
```
| DOOGLAK/Tagged_Uni_250v8_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T19:39:39Z | 106 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T19:35:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni250v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_250v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni250v8_wikigold_split
type: tagged_uni250v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5548306927617273
- name: Recall
type: recall
value: 0.4939159292035398
- name: F1
type: f1
value: 0.5226042428675933
- name: Accuracy
type: accuracy
value: 0.8976334059696954
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_250v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3186
- Precision: 0.5548
- Recall: 0.4939
- F1: 0.5226
- Accuracy: 0.8976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.4132 | 0.3646 | 0.2008 | 0.2590 | 0.8504 |
| No log | 2.0 | 190 | 0.2983 | 0.5077 | 0.4552 | 0.4800 | 0.8977 |
| No log | 3.0 | 285 | 0.3186 | 0.5548 | 0.4939 | 0.5226 | 0.8976 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_250v6_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T19:29:17Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T19:23:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni250v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_250v6_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni250v6_wikigold_split
type: tagged_uni250v6_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5571526351813826
- name: Recall
type: recall
value: 0.45730337078651684
- name: F1
type: f1
value: 0.5023141005862387
- name: Accuracy
type: accuracy
value: 0.8952912645884908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_250v6_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3080
- Precision: 0.5572
- Recall: 0.4573
- F1: 0.5023
- Accuracy: 0.8953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.3505 | 0.3004 | 0.1817 | 0.2265 | 0.8649 |
| No log | 2.0 | 144 | 0.2989 | 0.5217 | 0.4219 | 0.4665 | 0.8931 |
| No log | 3.0 | 216 | 0.3080 | 0.5572 | 0.4573 | 0.5023 | 0.8953 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_250v5_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T19:23:01Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v5_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T19:17:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni250v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_250v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni250v5_wikigold_split
type: tagged_uni250v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5808346213292117
- name: Recall
type: recall
value: 0.5341102899374645
- name: F1
type: f1
value: 0.5564934103361469
- name: Accuracy
type: accuracy
value: 0.9006217563331792
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_250v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3324
- Precision: 0.5808
- Recall: 0.5341
- F1: 0.5565
- Accuracy: 0.9006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 99 | 0.4305 | 0.3110 | 0.2149 | 0.2542 | 0.8533 |
| No log | 2.0 | 198 | 0.3340 | 0.5449 | 0.4935 | 0.5179 | 0.8956 |
| No log | 3.0 | 297 | 0.3324 | 0.5808 | 0.5341 | 0.5565 | 0.9006 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| roscoyoon/distilbert-base-uncased-finetuned | roscoyoon | 2022-08-11T19:07:34Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-11T08:40:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7734
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2955 | 1.0 | 318 | 3.2914 | 0.7452 |
| 2.6342 | 2.0 | 636 | 1.8815 | 0.8313 |
| 1.5504 | 3.0 | 954 | 1.1547 | 0.8952 |
| 1.0151 | 4.0 | 1272 | 0.8580 | 0.9113 |
| 0.7936 | 5.0 | 1590 | 0.7734 | 0.9184 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
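The hyperparameters listed above map directly onto `transformers` `TrainingArguments`. The sketch below is a hedged reconstruction of such a run, not the author's script; the clinc_oos preprocessing (tokenizing the `text` column and renaming `intent` to `labels`) is an assumption based on the dataset's standard schema.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

raw = load_dataset("clinc_oos", "plus")  # "plus" matches the card's dataset args
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

# Trainer expects the target column to be called "labels".
dataset = raw.map(tokenize, batched=True).rename_column("intent", "labels")
num_labels = raw["train"].features["intent"].num_classes

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=num_labels)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned",
    learning_rate=2e-5,              # values below mirror the card
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",      # the Adam betas/epsilon above are defaults
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```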
| DOOGLAK/Tagged_Uni_250v1_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T19:00:36Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T18:55:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni250v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_250v1_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni250v1_wikigold_split
type: tagged_uni250v1_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5971956660293181
- name: Recall
type: recall
value: 0.5290796160361377
- name: F1
type: f1
value: 0.5610778443113772
- name: Accuracy
type: accuracy
value: 0.906793008840565
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_250v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3057
- Precision: 0.5972
- Recall: 0.5291
- F1: 0.5611
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 87 | 0.3972 | 0.2749 | 0.2081 | 0.2369 | 0.8625 |
| No log | 2.0 | 174 | 0.2895 | 0.5545 | 0.5054 | 0.5288 | 0.9059 |
| No log | 3.0 | 261 | 0.3057 | 0.5972 | 0.5291 | 0.5611 | 0.9068 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| andres-hsn/Reinforce-AndresV0 | andres-hsn | 2022-08-11T18:47:14Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2022-08-11T18:42:39Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-AndresV0
results:
- metrics:
- type: mean_reward
value: 64.50 +/- 5.39
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, see Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
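The card links to the course unit but includes no code. Below is a minimal, self-contained REINFORCE sketch for CartPole-v1 in the spirit of that unit; the network size, learning rate, and the classic `gym` step API (4-tuple returns) are assumptions of this sketch, not the author's implementation.
```python
import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")  # classic gym API; gymnasium's reset/step differ

# Tiny policy network: 4 observation features -> 2 action logits.
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, done, _ = env.step(action.item())
        rewards.append(reward)

    # Discounted returns G_t, computed backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # REINFORCE objective: maximize sum_t log pi(a_t|s_t) * G_t.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```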
| athairus/xlm-roberta-base-finetuned-panx-de | athairus | 2022-08-11T18:37:59Z | 104 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T18:28:06Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8663101604278075
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2581 | 1.0 | 525 | 0.1690 | 0.8303 |
| 0.1305 | 2.0 | 1050 | 0.1352 | 0.8484 |
| 0.0839 | 3.0 | 1575 | 0.1339 | 0.8663 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
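The F1 above is entity-level, as typically computed with `seqeval` in token-classification Trainer scripts. A short sketch of the mechanics on toy BIO sequences (the labels are illustrative, not PAN-X.de outputs):
```python
from seqeval.metrics import f1_score

# One list of BIO tags per sentence; seqeval scores whole entities, not tokens.
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "O"]]

# The missed B-ORG costs a full entity: precision 2/2, recall 2/3, F1 0.8.
print(f1_score(y_true, y_pred))
```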
| Petros89/bert-finetuned-squad | Petros89 | 2022-08-11T18:30:06Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-08-03T14:56:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.7.0
- Datasets 2.4.0
- Tokenizers 0.12.1
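The card reports no metrics and leaves usage open; here is a minimal extractive-QA sketch with the standard pipeline API, using this card's model id (the question and context are illustrative):
```python
from transformers import pipeline

# Minimal sketch: extractive question answering with this card's checkpoint.
qa = pipeline("question-answering", model="Petros89/bert-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="The model is a fine-tuned version of bert-base-cased on the squad dataset.",
)
print(result)  # dict with answer text, score, and character span (start, end)
```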
| DOOGLAK/Tagged_Uni_100v5_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T18:26:29Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v5_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T18:22:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v5_wikigold_split
type: tagged_uni100v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.27475592747559274
- name: Recall
type: recall
value: 0.20112302194997447
- name: F1
type: f1
value: 0.2322428529325081
- name: Accuracy
type: accuracy
value: 0.8489666875886277
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4479
- Precision: 0.2748
- Recall: 0.2011
- F1: 0.2322
- Accuracy: 0.8490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 39 | 0.4908 | 0.2544 | 0.1445 | 0.1843 | 0.8292 |
| No log | 2.0 | 78 | 0.4703 | 0.2611 | 0.1881 | 0.2187 | 0.8437 |
| No log | 3.0 | 117 | 0.4479 | 0.2748 | 0.2011 | 0.2322 | 0.8490 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_100v4_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T18:21:22Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v4_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T18:16:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v4_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v4_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v4_wikigold_split
type: tagged_uni100v4_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.25279187817258886
- name: Recall
type: recall
value: 0.19148936170212766
- name: F1
type: f1
value: 0.2179113185530922
- name: Accuracy
type: accuracy
value: 0.8640945027509362
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v4_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v4_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3691
- Precision: 0.2528
- Recall: 0.1915
- F1: 0.2179
- Accuracy: 0.8641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 34 | 0.5215 | 0.1087 | 0.0026 | 0.0050 | 0.7980 |
| No log | 2.0 | 68 | 0.3908 | 0.2356 | 0.1515 | 0.1844 | 0.8527 |
| No log | 3.0 | 102 | 0.3691 | 0.2528 | 0.1915 | 0.2179 | 0.8641 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_100v3_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T18:15:31Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T18:10:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v3_wikigold_split
type: tagged_uni100v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.27637540453074433
- name: Recall
type: recall
value: 0.10801922590437642
- name: F1
type: f1
value: 0.15532921062204438
- name: Accuracy
type: accuracy
value: 0.8105687105062148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4884
- Precision: 0.2764
- Recall: 0.1080
- F1: 0.1553
- Accuracy: 0.8106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 26 | 0.6238 | 0.2 | 0.0089 | 0.0170 | 0.7822 |
| No log | 2.0 | 52 | 0.5210 | 0.2497 | 0.0587 | 0.0950 | 0.7971 |
| No log | 3.0 | 78 | 0.4884 | 0.2764 | 0.1080 | 0.1553 | 0.8106 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_100v2_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T18:09:41Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v2_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T18:04:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v2_wikigold_split
type: tagged_uni100v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.2783229259589652
- name: Recall
type: recall
value: 0.15885947046843177
- name: F1
type: f1
value: 0.20226904376012964
- name: Accuracy
type: accuracy
value: 0.8411943180251
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4048
- Precision: 0.2783
- Recall: 0.1589
- F1: 0.2023
- Accuracy: 0.8412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 39 | 0.4802 | 0.3667 | 0.0784 | 0.1292 | 0.8125 |
| No log | 2.0 | 78 | 0.4028 | 0.2745 | 0.1540 | 0.1973 | 0.8412 |
| No log | 3.0 | 117 | 0.4048 | 0.2783 | 0.1589 | 0.2023 | 0.8412 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_100v1_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T18:03:50Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:59:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v1_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v1_wikigold_split
type: tagged_uni100v1_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.23641213737912636
- name: Recall
type: recall
value: 0.18425155925155925
- name: F1
type: f1
value: 0.20709799912370383
- name: Accuracy
type: accuracy
value: 0.8493674748280798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4031
- Precision: 0.2364
- Recall: 0.1843
- F1: 0.2071
- Accuracy: 0.8494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 39 | 0.4906 | 0.1526 | 0.0580 | 0.0840 | 0.8187 |
| No log | 2.0 | 78 | 0.4213 | 0.2321 | 0.1736 | 0.1986 | 0.8456 |
| No log | 3.0 | 117 | 0.4031 | 0.2364 | 0.1843 | 0.2071 | 0.8494 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_100v0_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T17:58:39Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v0_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:53:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v0_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v0_wikigold_split
type: tagged_uni100v0_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.1801752464403067
- name: Recall
type: recall
value: 0.08303886925795052
- name: F1
type: f1
value: 0.11368348306841741
- name: Accuracy
type: accuracy
value: 0.8143372512510183
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601
- Precision: 0.1802
- Recall: 0.0830
- F1: 0.1137
- Accuracy: 0.8143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.5687 | 0.0882 | 0.0015 | 0.0030 | 0.7791 |
| No log | 2.0 | 66 | 0.5410 | 0.1319 | 0.0270 | 0.0448 | 0.7946 |
| No log | 3.0 | 99 | 0.4601 | 0.1802 | 0.0830 | 0.1137 | 0.8143 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T17:52:48Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v9_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:47:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v9_wikigold_split
type: tagged_uni50v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5
- name: Recall
type: recall
value: 0.000243605359317905
- name: F1
type: f1
value: 0.00048697345994643296
- name: Accuracy
type: accuracy
value: 0.7843220814175171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6233
- Precision: 0.5
- Recall: 0.0002
- F1: 0.0005
- Accuracy: 0.7843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 16 | 0.7531 | 0.0 | 0.0 | 0.0 | 0.7788 |
| No log | 2.0 | 32 | 0.6599 | 0.5 | 0.0002 | 0.0005 | 0.7823 |
| No log | 3.0 | 48 | 0.6233 | 0.5 | 0.0002 | 0.0005 | 0.7843 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_50v8_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T17:47:02Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:41:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v8_wikigold_split
type: tagged_uni50v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.15460526315789475
- name: Recall
type: recall
value: 0.023016650342801176
- name: F1
type: f1
value: 0.04006820119352089
- name: Accuracy
type: accuracy
value: 0.7925892757192432
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5527
- Precision: 0.1546
- Recall: 0.0230
- F1: 0.0401
- Accuracy: 0.7926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 19 | 0.6981 | 0.0 | 0.0 | 0.0 | 0.7786 |
| No log | 2.0 | 38 | 0.5851 | 0.1290 | 0.0049 | 0.0094 | 0.7832 |
| No log | 3.0 | 57 | 0.5527 | 0.1546 | 0.0230 | 0.0401 | 0.7926 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_50v7_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T17:41:22Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:37:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v7_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v7_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v7_wikigold_split
type: tagged_uni50v7_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.0
- name: Recall
type: recall
value: 0.0
- name: F1
type: f1
value: 0.0
- name: Accuracy
type: accuracy
value: 0.7783445190156599
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v7_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6772
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 12 | 0.7850 | 0.0 | 0.0 | 0.0 | 0.7783 |
| No log | 2.0 | 24 | 0.7010 | 0.0 | 0.0 | 0.0 | 0.7783 |
| No log | 3.0 | 36 | 0.6772 | 0.0 | 0.0 | 0.0 | 0.7783 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_50v6_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T17:36:45Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:31:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v6_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v6_wikigold_split
type: tagged_uni50v6_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.0
- name: Recall
type: recall
value: 0.0
- name: F1
type: f1
value: 0.0
- name: Accuracy
type: accuracy
value: 0.7775983130313839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v6_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6142
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 17 | 0.7369 | 0.0 | 0.0 | 0.0 | 0.7773 |
| No log | 2.0 | 34 | 0.6359 | 0.0 | 0.0 | 0.0 | 0.7773 |
| No log | 3.0 | 51 | 0.6142 | 0.0 | 0.0 | 0.0 | 0.7776 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_50v3_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T17:20:04Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:14:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v3_wikigold_split
type: tagged_uni50v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.14766839378238342
- name: Recall
type: recall
value: 0.013980868285504048
- name: F1
type: f1
value: 0.025543356486668164
- name: Accuracy
type: accuracy
value: 0.7865287304621612
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5987
- Precision: 0.1477
- Recall: 0.0140
- F1: 0.0255
- Accuracy: 0.7865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 14 | 0.7260 | 0.0 | 0.0 | 0.0 | 0.7789 |
| No log | 2.0 | 28 | 0.6256 | 0.1436 | 0.0140 | 0.0255 | 0.7865 |
| No log | 3.0 | 42 | 0.5987 | 0.1477 | 0.0140 | 0.0255 | 0.7865 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| DOOGLAK/Tagged_Uni_50v2_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T17:14:13Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v2_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T17:08:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v2_wikigold_split
type: tagged_uni50v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.08
- name: Recall
type: recall
value: 0.0004884004884004884
- name: F1
type: f1
value: 0.0009708737864077671
- name: Accuracy
type: accuracy
value: 0.7850352033723486
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6159
- Precision: 0.08
- Recall: 0.0005
- F1: 0.0010
- Accuracy: 0.7850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 16 | 0.7399 | 0.0 | 0.0 | 0.0 | 0.7779 |
| No log | 2.0 | 32 | 0.6545 | 0.0833 | 0.0002 | 0.0005 | 0.7817 |
| No log | 3.0 | 48 | 0.6159 | 0.08 | 0.0005 | 0.0010 | 0.7850 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| Yao92/distilbert-base-uncased-finetuned-cola | Yao92 | 2022-08-11T17:12:08Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-11T17:01:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5303243504311796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8278
- Matthews Correlation: 0.5303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5225 | 1.0 | 535 | 0.5299 | 0.3973 |
| 0.3485 | 2.0 | 1070 | 0.5279 | 0.4975 |
| 0.2375 | 3.0 | 1605 | 0.5637 | 0.5275 |
| 0.1832 | 4.0 | 2140 | 0.7995 | 0.5249 |
| 0.1301 | 5.0 | 2675 | 0.8278 | 0.5303 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
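The headline metric here is Matthews correlation, which is robust to the label imbalance in CoLA. A short sketch of how it is computed with scikit-learn on illustrative labels (not the GLUE validation outputs):
```python
from sklearn.metrics import matthews_corrcoef

# Toy binary acceptability judgments: 1 = acceptable, 0 = unacceptable.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# MCC ranges from -1 to 1, with 0 at chance level; these toy labels give 0.5.
print(matthews_corrcoef(y_true, y_pred))
```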
| waynedsouza/distilbert-base-uncased-gc-art2e | waynedsouza | 2022-08-11T16:45:26Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-11T16:39:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-gc-art2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-gc-art2e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0863
- Accuracy: 0.982
- F1: 0.9731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0875 | 1.0 | 32 | 0.0874 | 0.982 | 0.9731 |
| 0.0711 | 2.0 | 64 | 0.0863 | 0.982 | 0.9731 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| DOOGLAK/Tagged_One_500v7_NER_Model_3Epochs_AUGMENTED | DOOGLAK | 2022-08-11T16:45:22Z | 104 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one500v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-11T16:40:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v7_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v7_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v7_wikigold_split
type: tagged_one500v7_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.6700655498907502
- name: Recall
type: recall
value: 0.6767193821257815
- name: F1
type: f1
value: 0.6733760292772187
- name: Accuracy
type: accuracy
value: 0.9237216043353603
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v7_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2679
- Precision: 0.6701
- Recall: 0.6767
- F1: 0.6734
- Accuracy: 0.9237
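A token-classification sketch for this checkpoint (repo id taken from this card's header; `aggregation_strategy="simple"` merges word pieces into entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Tagged_One_500v7_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # group word pieces into entity spans
)
print(ner("Barack Obama visited Berlin in 2008."))
```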
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 156 | 0.3336 | 0.5893 | 0.4855 | 0.5324 | 0.8955 |
| No log | 2.0 | 312 | 0.2580 | 0.6617 | 0.6561 | 0.6589 | 0.9215 |
| No log | 3.0 | 468 | 0.2679 | 0.6701 | 0.6767 | 0.6734 | 0.9237 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_500v6_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T16:39:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one500v6_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T16:33:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v6_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v6_wikigold_split
type: tagged_one500v6_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.6866690621631333
- name: Recall
type: recall
value: 0.6719409282700421
- name: F1
type: f1
value: 0.679225164385996
- name: Accuracy
type: accuracy
value: 0.9239838169290094
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v6_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2690
- Precision: 0.6867
- Recall: 0.6719
- F1: 0.6792
- Accuracy: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 189 | 0.2819 | 0.6009 | 0.5352 | 0.5661 | 0.9105 |
| No log | 2.0 | 378 | 0.2614 | 0.6743 | 0.6406 | 0.6571 | 0.9201 |
| 0.11 | 3.0 | 567 | 0.2690 | 0.6867 | 0.6719 | 0.6792 | 0.9240 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_500v5_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T16:33:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one500v5_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T16:27:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v5_wikigold_split
type: tagged_one500v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.6984998170508598
- name: Recall
type: recall
value: 0.6817857142857143
- name: F1
type: f1
value: 0.690041568769203
- name: Accuracy
type: accuracy
value: 0.9276886906197251
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2523
- Precision: 0.6985
- Recall: 0.6818
- F1: 0.6900
- Accuracy: 0.9277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 161 | 0.2446 | 0.5625 | 0.5493 | 0.5558 | 0.9167 |
| No log | 2.0 | 322 | 0.2487 | 0.6894 | 0.6557 | 0.6722 | 0.9237 |
| No log | 3.0 | 483 | 0.2523 | 0.6985 | 0.6818 | 0.6900 | 0.9277 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
QianMolloy/ppo-LunarLander-v2
|
QianMolloy
| 2022-08-11T16:23:15Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T16:22:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 250.97 +/- 23.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the `huggingface_sb3` naming convention; adjust it to the file actually in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed from the huggingface_sb3 naming convention.
checkpoint = load_from_hub("QianMolloy/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
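To sanity-check the reported mean reward, the loaded policy can be evaluated locally (a sketch assuming a Gym installation with the Box2D extras):
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```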
|
DOOGLAK/Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T16:21:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one500v3_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T16:16:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v3_wikigold_split
type: tagged_one500v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.697499143542309
- name: Recall
type: recall
value: 0.6782145236508994
- name: F1
type: f1
value: 0.6877216686370546
- name: Accuracy
type: accuracy
value: 0.9245400105495051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2659
- Precision: 0.6975
- Recall: 0.6782
- F1: 0.6877
- Accuracy: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 175 | 0.2990 | 0.5405 | 0.4600 | 0.4970 | 0.9007 |
| No log | 2.0 | 350 | 0.2789 | 0.6837 | 0.6236 | 0.6523 | 0.9157 |
| 0.1081 | 3.0 | 525 | 0.2659 | 0.6975 | 0.6782 | 0.6877 | 0.9245 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_500v1_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T16:09:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one500v1_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T16:03:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v1_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v1_wikigold_split
type: tagged_one500v1_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7131782945736435
- name: Recall
type: recall
value: 0.6693121693121693
- name: F1
type: f1
value: 0.690549300580007
- name: Accuracy
type: accuracy
value: 0.9232131948686622
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2834
- Precision: 0.7132
- Recall: 0.6693
- F1: 0.6905
- Accuracy: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 164 | 0.2830 | 0.4758 | 0.4064 | 0.4384 | 0.9032 |
| No log | 2.0 | 328 | 0.2631 | 0.6901 | 0.6716 | 0.6807 | 0.9232 |
| No log | 3.0 | 492 | 0.2834 | 0.7132 | 0.6693 | 0.6905 | 0.9232 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_500v0_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T16:03:05Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one500v0_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T15:57:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v0_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v0_wikigold_split
type: tagged_one500v0_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.6663055254604551
- name: Recall
type: recall
value: 0.683839881393625
- name: F1
type: f1
value: 0.6749588439729285
- name: Accuracy
type: accuracy
value: 0.9260204081632653
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2679
- Precision: 0.6663
- Recall: 0.6838
- F1: 0.6750
- Accuracy: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 173 | 0.2827 | 0.5972 | 0.5556 | 0.5757 | 0.9079 |
| No log | 2.0 | 346 | 0.2668 | 0.6442 | 0.6383 | 0.6412 | 0.9204 |
| 0.1142 | 3.0 | 519 | 0.2679 | 0.6663 | 0.6838 | 0.6750 | 0.9260 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_250v9_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T15:57:04Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one250v9_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T15:51:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v9_wikigold_split
type: tagged_one250v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5794920037629351
- name: Recall
type: recall
value: 0.5334872979214781
- name: F1
type: f1
value: 0.5555388546520367
- name: Accuracy
type: accuracy
value: 0.9034831230122089
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3012
- Precision: 0.5795
- Recall: 0.5335
- F1: 0.5555
- Accuracy: 0.9035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 90 | 0.3614 | 0.2860 | 0.1969 | 0.2332 | 0.8576 |
| No log | 2.0 | 180 | 0.3317 | 0.5186 | 0.4596 | 0.4873 | 0.8924 |
| No log | 3.0 | 270 | 0.3012 | 0.5795 | 0.5335 | 0.5555 | 0.9035 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_250v8_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T15:51:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one250v8_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T15:45:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v8_wikigold_split
type: tagged_one250v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5351851851851852
- name: Recall
type: recall
value: 0.4795353982300885
- name: F1
type: f1
value: 0.5058343057176197
- name: Accuracy
type: accuracy
value: 0.8947195053970506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3389
- Precision: 0.5352
- Recall: 0.4795
- F1: 0.5058
- Accuracy: 0.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.4305 | 0.3497 | 0.1814 | 0.2389 | 0.8488 |
| No log | 2.0 | 190 | 0.3469 | 0.4995 | 0.4281 | 0.4611 | 0.8875 |
| No log | 3.0 | 285 | 0.3389 | 0.5352 | 0.4795 | 0.5058 | 0.8947 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T15:45:15Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one250v7_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T15:40:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v7_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v7_wikigold_split
type: tagged_one250v7_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5509259259259259
- name: Recall
type: recall
value: 0.4675834970530452
- name: F1
type: f1
value: 0.5058448459086079
- name: Accuracy
type: accuracy
value: 0.8893517705222476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3809
- Precision: 0.5509
- Recall: 0.4676
- F1: 0.5058
- Accuracy: 0.8894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 87 | 0.4450 | 0.1912 | 0.1047 | 0.1353 | 0.8278 |
| No log | 2.0 | 174 | 0.3903 | 0.4992 | 0.4176 | 0.4548 | 0.8820 |
| No log | 3.0 | 261 | 0.3809 | 0.5509 | 0.4676 | 0.5058 | 0.8894 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_250v5_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T15:33:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one250v5_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T15:27:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v5_wikigold_split
type: tagged_one250v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5500158780565259
- name: Recall
type: recall
value: 0.4923251847640705
- name: F1
type: f1
value: 0.5195740212989352
- name: Accuracy
type: accuracy
value: 0.8949951184420122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- Precision: 0.5500
- Recall: 0.4923
- F1: 0.5196
- Accuracy: 0.8950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 91 | 0.3950 | 0.2800 | 0.2138 | 0.2424 | 0.8558 |
| No log | 2.0 | 182 | 0.3633 | 0.4938 | 0.4306 | 0.4601 | 0.8887 |
| No log | 3.0 | 273 | 0.3623 | 0.5500 | 0.4923 | 0.5196 | 0.8950 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_250v2_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T15:16:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one250v2_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T15:10:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v2_wikigold_split
type: tagged_one250v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5859220092531394
- name: Recall
type: recall
value: 0.5074413279908414
- name: F1
type: f1
value: 0.5438650306748466
- name: Accuracy
type: accuracy
value: 0.8979617609173338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3573
- Precision: 0.5859
- Recall: 0.5074
- F1: 0.5439
- Accuracy: 0.8980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 93 | 0.3884 | 0.2899 | 0.2006 | 0.2371 | 0.8583 |
| No log | 2.0 | 186 | 0.3502 | 0.5467 | 0.4705 | 0.5058 | 0.8937 |
| No log | 3.0 | 279 | 0.3573 | 0.5859 | 0.5074 | 0.5439 | 0.8980 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_250v1_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T15:10:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one250v1_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T15:05:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v1_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v1_wikigold_split
type: tagged_one250v1_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5896180215475024
- name: Recall
type: recall
value: 0.5098814229249012
- name: F1
type: f1
value: 0.5468584405753218
- name: Accuracy
type: accuracy
value: 0.8999339498018494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3321
- Precision: 0.5896
- Recall: 0.5099
- F1: 0.5469
- Accuracy: 0.8999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 89 | 0.3518 | 0.3537 | 0.2945 | 0.3214 | 0.8761 |
| No log | 2.0 | 178 | 0.3115 | 0.5583 | 0.4867 | 0.5201 | 0.8974 |
| No log | 3.0 | 267 | 0.3321 | 0.5896 | 0.5099 | 0.5469 | 0.8999 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
huggingtweets/henryfarrell
|
huggingtweets
| 2022-08-11T15:08:57Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-11T15:08:06Z |
---
language: en
thumbnail: http://www.huggingtweets.com/henryfarrell/1660230533136/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1161630886963683328/SgNq1g_6_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Henry Farrell</div>
<div style="text-align: center; font-size: 14px;">@henryfarrell</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Henry Farrell.
| Data | Henry Farrell |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 1491 |
| Short tweets | 120 |
| Tweets kept | 1636 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3s3w7i53/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @henryfarrell's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/aifgbb0k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/aifgbb0k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/henryfarrell')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DOOGLAK/Tagged_One_250v0_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T15:04:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one250v0_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T14:59:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v0_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v0_wikigold_split
type: tagged_one250v0_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5125421190565331
- name: Recall
type: recall
value: 0.3694009713977334
- name: F1
type: f1
value: 0.4293554963148816
- name: Accuracy
type: accuracy
value: 0.8786972744569918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4287
- Precision: 0.5125
- Recall: 0.3694
- F1: 0.4294
- Accuracy: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 96 | 0.4352 | 0.3056 | 0.1692 | 0.2178 | 0.8448 |
| No log | 2.0 | 192 | 0.3881 | 0.4394 | 0.3295 | 0.3766 | 0.8773 |
| No log | 3.0 | 288 | 0.4287 | 0.5125 | 0.3694 | 0.4294 | 0.8787 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_100v9_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T14:58:20Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one100v9_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T14:53:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v9_wikigold_split
type: tagged_one100v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.3040441176470588
- name: Recall
type: recall
value: 0.21319927816447537
- name: F1
type: f1
value: 0.2506440369752993
- name: Accuracy
type: accuracy
value: 0.8538912172644546
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4255
- Precision: 0.3040
- Recall: 0.2132
- F1: 0.2506
- Accuracy: 0.8539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 40 | 0.5167 | 0.1936 | 0.0376 | 0.0630 | 0.8004 |
| No log | 2.0 | 80 | 0.4406 | 0.2405 | 0.1441 | 0.1802 | 0.8385 |
| No log | 3.0 | 120 | 0.4255 | 0.3040 | 0.2132 | 0.2506 | 0.8539 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
vish88/xlnet-base-mnli-orgs-finetuned1
|
vish88
| 2022-08-11T14:53:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-05T00:45:17Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-mnli-orgs-finetuned1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-mnli-orgs-finetuned1
This model is a fine-tuned version of [clevrly/xlnet-base-mnli-finetuned](https://huggingface.co/clevrly/xlnet-base-mnli-finetuned) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1542
- F1: 0.6957
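A hedged scoring sketch for this checkpoint (the example sentence is illustrative; inspect `model.config.id2label` for the real label mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "vish88/xlnet-base-mnli-orgs-finetuned1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Acme Corp announced a merger.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p, 4) for i, p in enumerate(probs.tolist())})
```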
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2719 | 1.0 | 1462 | 0.2841 | 0.0 |
| 0.3042 | 2.0 | 2924 | 0.2664 | 0.4324 |
| 0.1366 | 3.0 | 4386 | 0.1408 | 0.6452 |
| 0.1149 | 4.0 | 5848 | 0.1387 | 0.6866 |
| 0.0986 | 5.0 | 7310 | 0.1542 | 0.6957 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DOOGLAK/Tagged_One_100v8_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T14:52:30Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one100v8_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T14:47:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v8_wikigold_split
type: tagged_one100v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.18848653667595172
- name: Recall
type: recall
value: 0.0498159509202454
- name: F1
type: f1
value: 0.07880434782608696
- name: Accuracy
type: accuracy
value: 0.8035317050796927
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5649
- Precision: 0.1885
- Recall: 0.0498
- F1: 0.0788
- Accuracy: 0.8035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 37 | 0.7042 | 0.0 | 0.0 | 0.0 | 0.7750 |
| No log | 2.0 | 74 | 0.5744 | 0.1628 | 0.0243 | 0.0423 | 0.7930 |
| No log | 3.0 | 111 | 0.5649 | 0.1885 | 0.0498 | 0.0788 | 0.8035 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_100v7_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T14:46:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one100v7_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T14:41:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v7_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v7_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v7_wikigold_split
type: tagged_one100v7_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.2402332361516035
- name: Recall
type: recall
value: 0.10690192008303062
- name: F1
type: f1
value: 0.14796193212425932
- name: Accuracy
type: accuracy
value: 0.817534449274022
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v7_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5232
- Precision: 0.2402
- Recall: 0.1069
- F1: 0.1480
- Accuracy: 0.8175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 26 | 0.6129 | 0.0647 | 0.0023 | 0.0045 | 0.7840 |
| No log | 2.0 | 52 | 0.5177 | 0.2035 | 0.0807 | 0.1156 | 0.8130 |
| No log | 3.0 | 78 | 0.5232 | 0.2402 | 0.1069 | 0.1480 | 0.8175 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_100v6_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T14:40:57Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one100v6_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T14:35:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v6_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v6_wikigold_split
type: tagged_one100v6_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.244097995545657
- name: Recall
type: recall
value: 0.13908629441624365
- name: F1
type: f1
value: 0.17720291026677445
- name: Accuracy
type: accuracy
value: 0.8258844149255108
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v6_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5346
- Precision: 0.2441
- Recall: 0.1391
- F1: 0.1772
- Accuracy: 0.8259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 47 | 0.5840 | 0.1614 | 0.0454 | 0.0709 | 0.8044 |
| No log | 2.0 | 94 | 0.5226 | 0.2489 | 0.1312 | 0.1718 | 0.8256 |
| No log | 3.0 | 141 | 0.5346 | 0.2441 | 0.1391 | 0.1772 | 0.8259 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
VietAI/vi-bartflax-large-news
|
VietAI
| 2022-08-11T14:40:17Z | 36 | 1 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"bart",
"text2text-generation",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-09T00:33:59Z |
---
language: vi
---
# BART-large on Vietnamese News
Details will be available soon.
For more information, please contact anhduongng.1001@gmail.com (Dương).
### Important note
When finetuning this model on downstream tasks (e.g. text summarization), ensure that your label has the form of `tokenizer.bos_token + target + tokenizer.eos_token` before tokenizing.
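A minimal sketch of that label preparation (the target string is a placeholder, and loading the tokenizer from this repo is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("VietAI/vi-bartflax-large-news")
target = "..."  # your target text, e.g. a reference summary
label_text = tokenizer.bos_token + target + tokenizer.eos_token
labels = tokenizer(label_text, add_special_tokens=False).input_ids
```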
|
Cube/ShijiBERT
|
Cube
| 2022-08-11T14:39:40Z | 2 | 0 |
transformers
|
[
"transformers",
"bert",
"fill-mask",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-11T14:01:58Z |
---
language:
- "zh"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "[MASK]太元中,武陵人捕鱼为业。"
- text: "问征夫以前路,恨晨光之[MASK]微。"
- text: "浔阳江头夜送客,枫叶[MASK]花秋瑟瑟。"
---
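A fill-mask sketch matching the widget examples above (repo id from this record's header):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Cube/ShijiBERT")
for pred in fill("[MASK]太元中,武陵人捕鱼为业。")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```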
|
harish/t5-e2e-2epochs-lr1e4-alpha0-5
|
harish
| 2022-08-11T14:22:13Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-11T14:17:21Z |
---
license: cc-by-nc-sa-4.0
---
|
DOOGLAK/Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T14:19:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one100v2_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T14:13:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v2_wikigold_split
type: tagged_one100v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.29022988505747127
- name: Recall
type: recall
value: 0.12856415478615071
- name: F1
type: f1
value: 0.17819336626676077
- name: Accuracy
type: accuracy
value: 0.833149450650485
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4407
- Precision: 0.2902
- Recall: 0.1286
- F1: 0.1782
- Accuracy: 0.8331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 40 | 0.5318 | 0.2817 | 0.0204 | 0.0380 | 0.7978 |
| No log | 2.0 | 80 | 0.4431 | 0.2932 | 0.1146 | 0.1647 | 0.8291 |
| No log | 3.0 | 120 | 0.4407 | 0.2902 | 0.1286 | 0.1782 | 0.8331 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
miguelwon/xlm-roberta-base-finetuned-panx-de
|
miguelwon
| 2022-08-11T14:08:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T12:47:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8615332274892267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1375
- F1: 0.8615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 525 | 0.1795 | 0.8092 |
| No log | 2.0 | 1050 | 0.1360 | 0.8490 |
| No log | 3.0 | 1575 | 0.1375 | 0.8615 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.13.0.dev20220808
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DOOGLAK/Tagged_One_100v0_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T14:07:39Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one100v0_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T14:02:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v0_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v0_wikigold_split
type: tagged_one100v0_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.16896060749881348
- name: Recall
type: recall
value: 0.08985360928823827
- name: F1
type: f1
value: 0.11731751524139067
- name: Accuracy
type: accuracy
value: 0.8183405097172117
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4700
- Precision: 0.1690
- Recall: 0.0899
- F1: 0.1173
- Accuracy: 0.8183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 32 | 0.5975 | 0.1034 | 0.0015 | 0.0030 | 0.7790 |
| No log | 2.0 | 64 | 0.4756 | 0.1607 | 0.0765 | 0.1036 | 0.8137 |
| No log | 3.0 | 96 | 0.4700 | 0.1690 | 0.0899 | 0.1173 | 0.8183 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Tagged_One_50v9_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T14:02:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one50v9_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T13:57:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one50v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_50v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one50v9_wikigold_split
type: tagged_one50v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5
- name: Recall
type: recall
value: 0.000243605359317905
- name: F1
type: f1
value: 0.00048697345994643296
- name: Accuracy
type: accuracy
value: 0.7806885723898171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6504
- Precision: 0.5
- Recall: 0.0002
- F1: 0.0005
- Accuracy: 0.7807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 16 | 0.7521 | 0.0 | 0.0 | 0.0 | 0.7782 |
| No log | 2.0 | 32 | 0.6778 | 1.0 | 0.0002 | 0.0005 | 0.7797 |
| No log | 3.0 | 48 | 0.6504 | 0.5 | 0.0002 | 0.0005 | 0.7807 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
srcocotero/bert-large-qa
|
srcocotero
| 2022-08-11T13:46:18Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-07T14:48:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-large-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-qa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
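For extractive QA, the model can be queried through the `question-answering` pipeline (a usage sketch; question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="srcocotero/bert-large-qa")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="bert-large-qa is a fine-tuned version of bert-large-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```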
|
DOOGLAK/Tagged_One_50v5_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T13:40:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one50v5_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T13:36:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one50v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_50v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one50v5_wikigold_split
type: tagged_one50v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.11643835616438356
- name: Recall
type: recall
value: 0.008254430687059966
- name: F1
type: f1
value: 0.015416005440943096
- name: Accuracy
type: accuracy
value: 0.7840127288617977
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6440
- Precision: 0.1164
- Recall: 0.0083
- F1: 0.0154
- Accuracy: 0.7840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 26 | 0.6934 | 0.0 | 0.0 | 0.0 | 0.7768 |
| No log | 2.0 | 52 | 0.6426 | 0.0855 | 0.0024 | 0.0047 | 0.7799 |
| No log | 3.0 | 78 | 0.6440 | 0.1164 | 0.0083 | 0.0154 | 0.7840 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
mrm8488/Worm_v2
|
mrm8488
| 2022-08-11T13:35:34Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] |
reinforcement-learning
| 2022-08-11T13:35:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Write your model_id: mrm8488/Worm_v2
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DOOGLAK/Tagged_One_50v2_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T13:25:41Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one50v2_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T13:20:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one50v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_50v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one50v2_wikigold_split
type: tagged_one50v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.125
- name: Recall
type: recall
value: 0.0007326007326007326
- name: F1
type: f1
value: 0.0014566642388929353
- name: Accuracy
type: accuracy
value: 0.7835104713215839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6200
- Precision: 0.125
- Recall: 0.0007
- F1: 0.0015
- Accuracy: 0.7835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 18 | 0.7424 | 0.0 | 0.0 | 0.0 | 0.7776 |
| No log | 2.0 | 36 | 0.6479 | 0.0909 | 0.0002 | 0.0005 | 0.7819 |
| No log | 3.0 | 54 | 0.6200 | 0.125 | 0.0007 | 0.0015 | 0.7835 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
GEEKLEO/FINBERT
|
GEEKLEO
| 2022-08-11T13:20:35Z | 91 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-08-11T07:13:02Z |
Mirrored here for personal convenience only; the original model is the open-source valuesimplex/FinBERT, released on GitHub at https://github.com/valuesimplex/FinBERT.
This is an open-source Chinese BERT model pre-trained on a large-scale financial-domain corpus.
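A minimal loading sketch, assuming the checkpoint works with the standard `Auto*` classes (the example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GEEKLEO/FINBERT")
model = AutoModel.from_pretrained("GEEKLEO/FINBERT")

# Embed a Chinese financial-news sentence with the encoder.
inputs = tokenizer("央行宣布下调金融机构存款准备金率。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```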
|
DOOGLAK/Tagged_One_50v1_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T13:20:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one50v1_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T13:15:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one50v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_50v1_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one50v1_wikigold_split
type: tagged_one50v1_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.19072164948453607
- name: Recall
type: recall
value: 0.02711284807034685
- name: F1
type: f1
value: 0.04747647562018819
- name: Accuracy
type: accuracy
value: 0.7925038291737995
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6207
- Precision: 0.1907
- Recall: 0.0271
- F1: 0.0475
- Accuracy: 0.7925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 26 | 0.6635 | 0.0 | 0.0 | 0.0 | 0.7775 |
| No log | 2.0 | 52 | 0.5963 | 0.1820 | 0.0208 | 0.0373 | 0.7906 |
| No log | 3.0 | 78 | 0.6207 | 0.1907 | 0.0271 | 0.0475 | 0.7925 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
ClementRomac/TA_Random_SAC_chimpanzee_easy_parkour_s15
|
ClementRomac
| 2022-08-11T13:16:58Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"region:us"
] |
reinforcement-learning
| 2022-08-11T13:15:54Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
*This policy was not part of TeachMyAgent's benchmark. It was trained on the easy task space of the Parkour environment with water removed.*
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour (easy + no water)',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'climbing_profile_chimpanzee'}
```
|
ClementRomac/TA_Random_SAC_chimpanzee_easy_parkour_s2
|
ClementRomac
| 2022-08-11T13:13:53Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"region:us"
] |
reinforcement-learning
| 2022-08-11T13:10:36Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
*This policy was not part of TeachMyAgent's benchmark. It was trained on the easy task space of the Parkour environment with water removed.*
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour (easy + no water)',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'climbing_profile_chimpanzee'}
```
|
carted-nlp/categorization-finetuned-20220721-164940-distilled-20220811-074207
|
carted-nlp
| 2022-08-11T13:09:13Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-11T07:43:56Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: categorization-finetuned-20220721-164940-distilled-20220811-074207
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# categorization-finetuned-20220721-164940-distilled-20220811-074207
This model is a fine-tuned version of [carted-nlp/categorization-finetuned-20220721-164940](https://huggingface.co/carted-nlp/categorization-finetuned-20220721-164940) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1499
- Accuracy: 0.8771
- F1: 0.8763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 96
- seed: 314
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1500
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 0.5644 | 0.56 | 2500 | 0.2739 | 0.7822 | 0.7774 |
| 0.2658 | 1.12 | 5000 | 0.2288 | 0.8159 | 0.8127 |
| 0.2307 | 1.69 | 7500 | 0.2082 | 0.8298 | 0.8273 |
| 0.2126 | 2.25 | 10000 | 0.1970 | 0.8389 | 0.8370 |
| 0.2012 | 2.81 | 12500 | 0.1888 | 0.8450 | 0.8433 |
| 0.1903 | 3.37 | 15000 | 0.1829 | 0.8496 | 0.8485 |
| 0.1846 | 3.94 | 17500 | 0.1783 | 0.8529 | 0.8511 |
| 0.1771 | 4.5 | 20000 | 0.1750 | 0.8548 | 0.8537 |
| 0.1726 | 5.06 | 22500 | 0.1727 | 0.8577 | 0.8564 |
| 0.1673 | 5.62 | 25000 | 0.1683 | 0.8602 | 0.8591 |
| 0.1648 | 6.19 | 27500 | 0.1675 | 0.8608 | 0.8597 |
| 0.1596 | 6.75 | 30000 | 0.1657 | 0.8630 | 0.8620 |
| 0.1563 | 7.31 | 32500 | 0.1635 | 0.8646 | 0.8639 |
| 0.154 | 7.87 | 35000 | 0.1613 | 0.8656 | 0.8647 |
| 0.1496 | 8.43 | 37500 | 0.1611 | 0.8666 | 0.8656 |
| 0.1496 | 9.0 | 40000 | 0.1598 | 0.8676 | 0.8669 |
| 0.1445 | 9.56 | 42500 | 0.1594 | 0.8681 | 0.8671 |
| 0.1435 | 10.12 | 45000 | 0.1588 | 0.8688 | 0.8679 |
| 0.1407 | 10.68 | 47500 | 0.1568 | 0.8703 | 0.8695 |
| 0.1382 | 11.25 | 50000 | 0.1564 | 0.8708 | 0.8700 |
| 0.1372 | 11.81 | 52500 | 0.1550 | 0.8720 | 0.8713 |
| 0.1344 | 12.37 | 55000 | 0.1559 | 0.8718 | 0.8708 |
| 0.1337 | 12.93 | 57500 | 0.1540 | 0.8735 | 0.8729 |
| 0.1303 | 13.5 | 60000 | 0.1541 | 0.8729 | 0.8721 |
| 0.1304 | 14.06 | 62500 | 0.1531 | 0.8735 | 0.8727 |
| 0.1274 | 14.62 | 65000 | 0.1535 | 0.8736 | 0.8727 |
| 0.1266 | 15.18 | 67500 | 0.1527 | 0.8750 | 0.8742 |
| 0.1251 | 15.74 | 70000 | 0.1525 | 0.8755 | 0.8748 |
| 0.1234 | 16.31 | 72500 | 0.1528 | 0.8753 | 0.8745 |
| 0.1229 | 16.87 | 75000 | 0.1516 | 0.8760 | 0.8753 |
| 0.121 | 17.43 | 77500 | 0.1523 | 0.8759 | 0.8752 |
| 0.1212 | 17.99 | 80000 | 0.1515 | 0.8760 | 0.8754 |
| 0.1185 | 18.56 | 82500 | 0.1514 | 0.8765 | 0.8757 |
| 0.1186 | 19.12 | 85000 | 0.1516 | 0.8766 | 0.8760 |
| 0.1172 | 19.68 | 87500 | 0.1506 | 0.8774 | 0.8767 |
| 0.1164 | 20.24 | 90000 | 0.1513 | 0.8770 | 0.8763 |
| 0.116 | 20.81 | 92500 | 0.1507 | 0.8774 | 0.8767 |
| 0.1145 | 21.37 | 95000 | 0.1507 | 0.8777 | 0.8770 |
| 0.1143 | 21.93 | 97500 | 0.1506 | 0.8776 | 0.8770 |
| 0.1131 | 22.49 | 100000 | 0.1507 | 0.8779 | 0.8772 |
| 0.1131 | 23.05 | 102500 | 0.1505 | 0.8779 | 0.8772 |
| 0.1123 | 23.62 | 105000 | 0.1506 | 0.8781 | 0.8774 |
| 0.1117 | 24.18 | 107500 | 0.1504 | 0.8783 | 0.8776 |
| 0.1118 | 24.74 | 110000 | 0.1503 | 0.8784 | 0.8777 |
| 0.1111 | 25.3 | 112500 | 0.1503 | 0.8783 | 0.8776 |
| 0.1111 | 25.87 | 115000 | 0.1502 | 0.8784 | 0.8777 |
| 0.1105 | 26.43 | 117500 | 0.1504 | 0.8783 | 0.8776 |
| 0.1105 | 26.99 | 120000 | 0.1502 | 0.8786 | 0.8779 |
| 0.1104 | 27.55 | 122500 | 0.1503 | 0.8786 | 0.8779 |
| 0.1096 | 28.12 | 125000 | 0.1502 | 0.8785 | 0.8779 |
| 0.1101 | 28.68 | 127500 | 0.1501 | 0.8786 | 0.8779 |
| 0.1101 | 29.24 | 130000 | 0.1502 | 0.8786 | 0.8779 |
| 0.1094 | 29.8 | 132500 | 0.1501 | 0.8786 | 0.8779 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
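A usage sketch via the `text-classification` pipeline (the product title is illustrative; the label set comes from an internal taxonomy that this card does not document):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="carted-nlp/categorization-finetuned-20220721-164940-distilled-20220811-074207",
)
print(classifier("Stainless steel insulated water bottle, 750 ml"))
```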
|
DOOGLAK/Article_500v9_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T13:05:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article500v9_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T12:59:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v9_wikigold_split
type: article500v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.74375
- name: Recall
type: recall
value: 0.7617924528301887
- name: F1
type: f1
value: 0.7526631158455394
- name: Accuracy
type: accuracy
value: 0.9441837337228455
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1931
- Precision: 0.7438
- Recall: 0.7618
- F1: 0.7527
- Accuracy: 0.9442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 194 | 0.1870 | 0.7335 | 0.7335 | 0.7335 | 0.9401 |
| No log | 2.0 | 388 | 0.1840 | 0.7384 | 0.7561 | 0.7471 | 0.9444 |
| 0.1376 | 3.0 | 582 | 0.1931 | 0.7438 | 0.7618 | 0.7527 | 0.9442 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
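The Precision/Recall/F1 figures above are entity-level scores of the kind produced by seqeval (an assumption; the card does not name the metric library). A minimal sketch with toy tags:
```python
from seqeval.metrics import precision_score, recall_score, f1_score

# One of two gold entities is predicted correctly; none are spurious.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0  (every predicted entity is correct)
print(recall_score(y_true, y_pred))     # 0.5  (one of two gold entities found)
print(f1_score(y_true, y_pred))         # ~0.67
```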
|
ClementRomac/TA_ALP-GMM_SAC_spider_s1
|
ClementRomac
| 2022-08-11T12:59:15Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"region:us"
] |
reinforcement-learning
| 2022-08-11T12:48:21Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
*This policy was not part of TeachMyAgent's benchmark.*
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'spider'}
```
|
DOOGLAK/Article_500v8_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T12:58:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article500v8_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T12:53:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v8_wikigold_split
type: article500v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7349189934505344
- name: Recall
type: recall
value: 0.7560283687943262
- name: F1
type: f1
value: 0.7453242440132843
- name: Accuracy
type: accuracy
value: 0.9421215763172877
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2113
- Precision: 0.7349
- Recall: 0.7560
- F1: 0.7453
- Accuracy: 0.9421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 191 | 0.1914 | 0.7105 | 0.7181 | 0.7143 | 0.9382 |
| No log | 2.0 | 382 | 0.2045 | 0.7283 | 0.7574 | 0.7426 | 0.9408 |
| 0.1441 | 3.0 | 573 | 0.2113 | 0.7349 | 0.7560 | 0.7453 | 0.9421 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
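For lower-level use than the pipeline, the checkpoint can also be run directly (a sketch; the label names come from the model's own config, and the sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "DOOGLAK/Article_500v8_NER_Model_3Epochs_AUGMENTED"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[i] for i in logits.argmax(dim=-1)[0].tolist()]
print(list(zip(tokens, labels)))
```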
|
DOOGLAK/Article_500v5_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T12:40:47Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article500v5_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T12:35:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v5_wikigold_split
type: article500v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7302452316076294
- name: Recall
type: recall
value: 0.7657142857142857
- name: F1
type: f1
value: 0.7475592747559274
- name: Accuracy
type: accuracy
value: 0.9453822040028936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1848
- Precision: 0.7302
- Recall: 0.7657
- F1: 0.7476
- Accuracy: 0.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 172 | 0.1781 | 0.7013 | 0.7396 | 0.7200 | 0.9403 |
| No log | 2.0 | 344 | 0.1904 | 0.7203 | 0.7421 | 0.7310 | 0.9396 |
| 0.1436 | 3.0 | 516 | 0.1848 | 0.7302 | 0.7657 | 0.7476 | 0.9454 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DOOGLAK/Article_500v3_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T12:28:40Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article500v3_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T12:23:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v3_wikigold_split
type: article500v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7293136626042335
- name: Recall
type: recall
value: 0.7574950033311126
- name: F1
type: f1
value: 0.7431372549019608
- name: Accuracy
type: accuracy
value: 0.9403332402494647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Precision: 0.7293
- Recall: 0.7575
- F1: 0.7431
- Accuracy: 0.9403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 187 | 0.2080 | 0.6933 | 0.7109 | 0.7020 | 0.9363 |
| No log | 2.0 | 374 | 0.2159 | 0.7244 | 0.7338 | 0.7291 | 0.9379 |
| 0.1349 | 3.0 | 561 | 0.2187 | 0.7293 | 0.7575 | 0.7431 | 0.9403 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Eylul/ppo-LunarLander-v2
|
Eylul
| 2022-08-11T12:25:34Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:22:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 180.17 +/- 95.47
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub("Eylul/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
harish/t5-e2e-5epochs-lr1e4-alpha0-5-BLANKS
|
harish
| 2022-08-11T12:22:42Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-11T12:22:13Z |
---
license: cc-by-nc-sa-4.0
---
|
yogeshkulkarni/ppo-LunarLander-v2
|
yogeshkulkarni
| 2022-08-11T12:14:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T12:04:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 194.68 +/- 76.58
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub
repo_id = "yogeshkulkarni/ppo-LunarLander-v2" # The repo_id
filename = "ppo-LunarLander-v2.zip" # The model filename.zip
# A model trained on Python 3.8 is saved with pickle protocol 5,
# but Python 3.6 and 3.7 only support protocol 4.
# For compatibility we need to:
# 1. Install pickle5 (done at the beginning of the colab)
# 2. Pass a dict of custom objects as a parameter to PPO.load()
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
# Evaluate this model
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|
DOOGLAK/Article_500v2_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T12:11:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article500v2_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T12:05:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v2_wikigold_split
type: article500v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7113220815752461
- name: Recall
type: recall
value: 0.7526041666666666
- name: F1
type: f1
value: 0.7313810556760665
- name: Accuracy
type: accuracy
value: 0.9410548086866598
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Precision: 0.7113
- Recall: 0.7526
- F1: 0.7314
- Accuracy: 0.9411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 185 | 0.1795 | 0.6982 | 0.7530 | 0.7245 | 0.9412 |
| No log | 2.0 | 370 | 0.2018 | 0.7218 | 0.7537 | 0.7374 | 0.9403 |
| 0.1342 | 3.0 | 555 | 0.2086 | 0.7113 | 0.7526 | 0.7314 | 0.9411 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
flowers-team/TA_ALP-GMM_SAC_chimpanzee_s18
|
flowers-team
| 2022-08-11T12:07:16Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T10:12:51Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ALP-GMM_SAC_chimpanzee_s18
results:
- metrics:
- type: mean_reward
value: -54.01 +/- 71.37
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'climbing_profile_chimpanzee'}
```
|
DOOGLAK/Article_500v0_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T11:58:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article500v0_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T11:53:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v0_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v0_wikigold_split
type: article500v0_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7004528039010798
- name: Recall
type: recall
value: 0.7453669384729429
- name: F1
type: f1
value: 0.7222122463637995
- name: Accuracy
type: accuracy
value: 0.9411139455782312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Precision: 0.7005
- Recall: 0.7454
- F1: 0.7222
- Accuracy: 0.9411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 197 | 0.1988 | 0.6828 | 0.7046 | 0.6935 | 0.9347 |
| No log | 2.0 | 394 | 0.2051 | 0.6942 | 0.7454 | 0.7189 | 0.9403 |
| 0.1447 | 3.0 | 591 | 0.2180 | 0.7005 | 0.7454 | 0.7222 | 0.9411 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
flowers-team/TA_ADR_SAC_bipedal_s2
|
flowers-team
| 2022-08-11T11:58:49Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:58:38Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ADR_SAC_bipedal_s2
results:
- metrics:
- type: mean_reward
value: 189.10 +/- 122.50
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ADR',
 'morphology': 'old_classic_bipedal'}
```
|
flowers-team/TA_ADR_SAC_bipedal_s1
|
flowers-team
| 2022-08-11T11:58:36Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:58:26Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ADR_SAC_bipedal_s1
results:
- metrics:
- type: mean_reward
value: 212.60 +/- 137.22
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ADR',
 'morphology': 'old_classic_bipedal'}
```
|
harish/t5-e2e-10epochs-lr1e4-alpha0-1PLUSalpha0-9-e10
|
harish
| 2022-08-11T11:57:30Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-11T11:43:36Z |
---
license: cc-by-nc-sa-4.0
---
|
flowers-team/TA_ALP-GMM_SAC_fish_s44
|
flowers-team
| 2022-08-11T11:57:23Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T10:13:44Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ALP-GMM_SAC_fish_s44
results:
- metrics:
- type: mean_reward
value: 268.93 +/- 94.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'fish'}
```
|
flowers-team/TA_ALP-GMM_SAC_bipedal_s12
|
flowers-team
| 2022-08-11T11:56:23Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T10:13:19Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ALP-GMM_SAC_bipedal_s12
results:
- metrics:
- type: mean_reward
value: 229.56 +/- 132.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'old_classic_bipedal'}
```
|
flowers-team/TA_GoalGAN_SAC_chimpanzee_s15
|
flowers-team
| 2022-08-11T11:56:03Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:55:52Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: GoalGAN_SAC_chimpanzee_s15
results:
- metrics:
- type: mean_reward
value: -48.56 +/- 77.61
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'GoalGAN',
 'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_GoalGAN_SAC_chimpanzee_s2
|
flowers-team
| 2022-08-11T11:55:50Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:55:40Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: GoalGAN_SAC_chimpanzee_s2
results:
- metrics:
- type: mean_reward
value: -33.19 +/- 80.11
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'GoalGAN',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_GoalGAN_SAC_chimpanzee_s11
|
flowers-team
| 2022-08-11T11:55:21Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:55:07Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: GoalGAN_SAC_chimpanzee_s11
results:
- metrics:
- type: mean_reward
value: 12.27 +/- 121.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'GoalGAN',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_Random_SAC_chimpanzee_s19
|
flowers-team
| 2022-08-11T11:55:05Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:54:51Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Random_SAC_chimpanzee_s19
results:
- metrics:
- type: mean_reward
value: -58.01 +/- 1.63
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Random',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_Random_SAC_chimpanzee_s24
|
flowers-team
| 2022-08-11T11:54:27Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:54:11Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Random_SAC_chimpanzee_s24
results:
- metrics:
- type: mean_reward
value: -56.22 +/- 10.28
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Random',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_RIAC_SAC_fish_s5
|
flowers-team
| 2022-08-11T11:54:06Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:53:56Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: RIAC_SAC_fish_s5
results:
- metrics:
- type: mean_reward
value: 224.55 +/- 128.93
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'RIAC',
'morphology': 'fish'}
```
|
flowers-team/TA_ADR_SAC_fish_s46
|
flowers-team
| 2022-08-11T11:52:56Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:52:39Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ADR_SAC_fish_s46
results:
- metrics:
- type: mean_reward
value: -82.48 +/- 54.44
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'ADR',
'morphology': 'fish'}
```
|
flowers-team/TA_RIAC_SAC_chimpanzee_s10
|
flowers-team
| 2022-08-11T11:52:07Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:51:44Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: RIAC_SAC_chimpanzee_s10
results:
- metrics:
- type: mean_reward
value: -59.06 +/- 4.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'RIAC',
'morphology': 'climbing_profile_chimpanzee'}
```
|
DOOGLAK/Article_250v9_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T11:51:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article250v9_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T11:46:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article250v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_250v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article250v9_wikigold_split
type: article250v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.6808931599773883
- name: Recall
type: recall
value: 0.6954387990762124
- name: F1
type: f1
value: 0.68808911739503
- name: Accuracy
type: accuracy
value: 0.9338001436339386
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_250v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2025
- Precision: 0.6809
- Recall: 0.6954
- F1: 0.6881
- Accuracy: 0.9338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal code sketch reproducing this setup follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
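A minimal sketch of this setup with the `transformers` Trainer API; the label list and dataset objects are placeholders, since the `article250v9_wikigold_split` preprocessing is not described in this card (Adam's betas/epsilon and the linear schedule are the Trainer defaults):
```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed CoNLL-style tag set; the actual label list may differ.
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(label_list))

args = TrainingArguments(
    output_dir="Article_250v9_NER_Model_3Epochs_AUGMENTED",
    learning_rate=2e-5,             # as listed above
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    num_train_epochs=3,
    seed=42,
)

# With tokenized train/eval splits in hand:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   tokenizer=tokenizer)
# trainer.train()
```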
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 98 | 0.2169 | 0.5997 | 0.6579 | 0.6275 | 0.9256 |
| No log | 2.0 | 196 | 0.2077 | 0.6791 | 0.6804 | 0.6797 | 0.9317 |
| No log | 3.0 | 294 | 0.2025 | 0.6809 | 0.6954 | 0.6881 | 0.9338 |
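As a quick sanity check, the F1 column is the harmonic mean of precision and recall, so the final-epoch row can be reproduced directly:
```python
precision, recall = 0.6809, 0.6954
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4f}")  # ~0.6881, matching the table
```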
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
flowers-team/TA_RIAC_SAC_chimpanzee_s7
|
flowers-team
| 2022-08-11T11:51:28Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:51:17Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: RIAC_SAC_chimpanzee_s7
results:
- metrics:
- type: mean_reward
value: -50.45 +/- 6.05
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'RIAC',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_GoalGAN_SAC_fish_s5
|
flowers-team
| 2022-08-11T11:50:37Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:50:26Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: GoalGAN_SAC_fish_s5
results:
- metrics:
- type: mean_reward
value: 296.27 +/- 72.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'GoalGAN',
'morphology': 'fish'}
```
|
flowers-team/TA_Setter-Solver_SAC_bipedal_s4
|
flowers-team
| 2022-08-11T11:49:02Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:48:51Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Setter-Solver_SAC_bipedal_s4
results:
- metrics:
- type: mean_reward
value: 212.13 +/- 135.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Setter-Solver',
'morphology': 'old_classic_bipedal'}
```
|
flowers-team/TA_Setter-Solver_SAC_bipedal_s10
|
flowers-team
| 2022-08-11T11:48:25Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:48:14Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Setter-Solver_SAC_bipedal_s10
results:
- metrics:
- type: mean_reward
value: 239.05 +/- 138.15
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Setter-Solver',
'morphology': 'old_classic_bipedal'}
```
|
flowers-team/TA_GoalGAN_SAC_bipedal_s2
|
flowers-team
| 2022-08-11T11:48:12Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:48:01Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: GoalGAN_SAC_bipedal_s2
results:
- metrics:
- type: mean_reward
value: 225.91 +/- 136.42
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'GoalGAN',
'morphology': 'old_classic_bipedal'}
```
|
flowers-team/TA_ADR_SAC_chimpanzee_s26
|
flowers-team
| 2022-08-11T11:45:40Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:45:29Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ADR_SAC_chimpanzee_s26
results:
- metrics:
- type: mean_reward
value: -75.58 +/- 15.05
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'ADR',
'morphology': 'climbing_profile_chimpanzee'}
```
|
DOOGLAK/Article_250v8_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
| 2022-08-11T11:45:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article250v8_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-11T11:40:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article250v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_250v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article250v8_wikigold_split
type: article250v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.6710306406685237
- name: Recall
type: recall
value: 0.6662057522123894
- name: F1
type: f1
value: 0.6686094920899252
- name: Accuracy
type: accuracy
value: 0.9222875386408554
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_250v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2522
- Precision: 0.6710
- Recall: 0.6662
- F1: 0.6686
- Accuracy: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 100 | 0.2607 | 0.5716 | 0.5575 | 0.5645 | 0.9106 |
| No log | 2.0 | 200 | 0.2498 | 0.6572 | 0.6427 | 0.6499 | 0.9200 |
| No log | 3.0 | 300 | 0.2522 | 0.6710 | 0.6662 | 0.6686 | 0.9223 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
flowers-team/TA_ADR_SAC_chimpanzee_s24
|
flowers-team
| 2022-08-11T11:45:27Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:45:16Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ADR_SAC_chimpanzee_s24
results:
- metrics:
- type: mean_reward
value: -68.95 +/- 24.51
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'ADR',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_ADR_SAC_chimpanzee_s20
|
flowers-team
| 2022-08-11T11:45:05Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:44:51Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ADR_SAC_chimpanzee_s20
results:
- metrics:
- type: mean_reward
value: -57.14 +/- 2.69
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'ADR',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_Self-Paced_SAC_fish_s11
|
flowers-team
| 2022-08-11T11:44:28Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:44:17Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Self-Paced_SAC_fish_s11
results:
- metrics:
- type: mean_reward
value: -61.40 +/- 66.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Self-Paced',
'morphology': 'fish'}
```
|
flowers-team/TA_Self-Paced_SAC_fish_s5
|
flowers-team
| 2022-08-11T11:44:15Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:43:45Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Self-Paced_SAC_fish_s5
results:
- metrics:
- type: mean_reward
value: 193.28 +/- 140.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Self-Paced',
'morphology': 'fish'}
```
|
flowers-team/TA_Self-Paced_SAC_fish_s13
|
flowers-team
| 2022-08-11T11:43:36Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:43:23Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Self-Paced_SAC_fish_s13
results:
- metrics:
- type: mean_reward
value: 279.71 +/- 107.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Self-Paced',
'morphology': 'fish'}
```
|
flowers-team/TA_Self-Paced_SAC_chimpanzee_s10
|
flowers-team
| 2022-08-11T11:42:59Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:42:48Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Self-Paced_SAC_chimpanzee_s10
results:
- metrics:
- type: mean_reward
value: -70.90 +/- 5.24
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'Self-Paced',
'morphology': 'climbing_profile_chimpanzee'}
```
|
flowers-team/TA_RIAC_SAC_bipedal_s13
|
flowers-team
| 2022-08-11T11:42:18Z | 0 | 0 | null |
[
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-11T11:42:08Z |
---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: RIAC_SAC_bipedal_s13
results:
- metrics:
- type: mean_reward
value: 184.93 +/- 139.14
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds (along with the standard deviation) for each morphology, as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'RIAC',
'morphology': 'old_classic_bipedal'}
```
|