modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513
|
deepesh0x
| 2022-06-24T17:25:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:deepesh0x/autotrain-data-bert_wikipedia_sst_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-24T17:17:14Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-bert_wikipedia_sst_2
co2_eq_emissions: 16.686945384446037
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1034235513
- CO2 Emissions (in grams): 16.686945384446037
## Validation Metrics
- Loss: 0.14450643956661224
- Accuracy: 0.9527839643652561
- Precision: 0.9565852363250132
- Recall: 0.9588767633750332
- AUC: 0.9872179498202862
- F1: 0.9577296291373122
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
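The snippet above stops at the raw forward pass. As a minimal follow-up sketch (not part of the original card), the logits can be turned into a predicted label and probability:
```python
import torch

# Convert the raw logits from the forward pass above into class probabilities
probs = torch.softmax(outputs.logits, dim=-1)
predicted_class_id = int(probs.argmax(dim=-1))
# id2label comes from the model config; the exact label names depend on the AutoTrain setup
print(model.config.id2label[predicted_class_id], float(probs.max()))
```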
|
gballoccu/q-FrozenLake-v1-4x4-Slippery
|
gballoccu
| 2022-06-24T17:18:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T16:58:42Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.81 +/- 0.39
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="gballoccu/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
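Beyond evaluation, a minimal rollout sketch (not part of the original card, assuming the pre-0.26 `gym` step/reset API and the Q-table layout used above) that acts greedily with the loaded Q-table:
```python
import numpy as np

state = env.reset()
done = False
total_reward = 0.0
while not done:
    # Greedy policy: pick the action with the highest Q-value for the current state
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("Episode return:", total_reward)
```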
|
gballoccu/q-FrozenLake-v1-4x4-noSlippery
|
gballoccu
| 2022-06-24T17:01:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T17:01:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="gballoccu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
philschmid/DistilBERT-Banking77
|
philschmid
| 2022-06-24T14:31:49Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:banking77",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-02T10:38:18Z |
---
tags: autotrain
language: en
widget:
- text: I am still waiting on my card?
datasets:
- banking77
model-index:
- name: BERT-Banking77
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: BANKING77
type: banking77
metrics:
- name: Accuracy
type: accuracy
value: 91.99
- name: Macro F1
type: macro-f1
value: 91.99
- name: Weighted F1
type: weighted-f1
value: 91.99
- task:
type: text-classification
name: Text Classification
dataset:
name: banking77
type: banking77
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.922077922077922
verified: true
- name: Precision Macro
type: precision
value: 0.9256326708783564
verified: true
- name: Precision Micro
type: precision
value: 0.922077922077922
verified: true
- name: Precision Weighted
type: precision
value: 0.9256326708783565
verified: true
- name: Recall Macro
type: recall
value: 0.922077922077922
verified: true
- name: Recall Micro
type: recall
value: 0.922077922077922
verified: true
- name: Recall Weighted
type: recall
value: 0.922077922077922
verified: true
- name: F1 Macro
type: f1
value: 0.9221617304411865
verified: true
- name: F1 Micro
type: f1
value: 0.922077922077922
verified: true
- name: F1 Weighted
type: f1
value: 0.9221617304411867
verified: true
- name: loss
type: loss
value: 0.31692808866500854
verified: true
co2_eq_emissions: 5.632805352029529
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 940131045
- CO2 Emissions (in grams): 5.632805352029529
## Validation Metrics
- Loss: 0.3392622470855713
- Accuracy: 0.9199410609037328
- Macro F1: 0.9199390885956755
- Micro F1: 0.9199410609037327
- Weighted F1: 0.9198140295005729
- Macro Precision: 0.9235531521509113
- Micro Precision: 0.9199410609037328
- Weighted Precision: 0.9228777883152248
- Macro Recall: 0.919570805773292
- Micro Recall: 0.9199410609037328
- Weighted Recall: 0.9199410609037328
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/philschmid/autotrain-does-it-work-940131045
```
Or Python API:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/DistilBERT-Banking77'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
```
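As a small usage note (not from the original card), the pipeline also accepts a batch of queries; the second example below is the widget text from the card metadata:
```python
queries = [
    "What is the base of the exchange rates?",
    "I am still waiting on my card?",
]
# Each entry returns the top intent label and its score
print(classifier(queries))
```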
|
Corianas/ppo-QbertNoFrameskip-v4_3.load-best
|
Corianas
| 2022-06-24T13:56:07Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T13:55:03Z |
---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 16115.00 +/- 3313.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
---
# **PPO** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env QbertNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env QbertNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
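The `'lin_2.5e-4'` and `'lin_0.1'` entries denote linearly decaying schedules. As an illustrative sketch (not part of the original card), such a schedule can be expressed as a callable that Stable Baselines3 evaluates with the remaining training progress going from 1 to 0:
```python
def linear_schedule(initial_value: float):
    """Return a schedule that decays linearly from initial_value to 0."""
    def schedule(progress_remaining: float) -> float:
        # progress_remaining goes from 1 (start of training) to 0 (end of training)
        return progress_remaining * initial_value
    return schedule

# e.g. learning_rate=linear_schedule(2.5e-4), clip_range=linear_schedule(0.1)
```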
|
philschmid/habana-xlm-r-large-amazon-massive
|
philschmid
| 2022-06-24T13:38:20Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"optimum_habana",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"habana",
"dataset:AmazonScience/massive",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-20T14:16:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- habana
datasets:
- AmazonScience/massive
metrics:
- accuracy
- f1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# philschmid/habana-xlm-r-large-amazon-massive
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the AmazonScience/massive dataset.
It achieves the following results on the evaluation set:
## 8x HPU approx. 41min
**train results**
```bash
{'loss': 0.2651, 'learning_rate': 2.4e-05, 'epoch': 1.0}
{'loss': 0.1079, 'learning_rate': 1.8e-05, 'epoch': 2.0}
{'loss': 0.0563, 'learning_rate': 1.2e-05, 'epoch': 3.0}
{'loss': 0.0308, 'learning_rate': 6e-06, 'epoch': 4.0}
{'loss': 0.0165, 'learning_rate': 0.0, 'epoch': 5.0}
```
**total**
```bash
{'train_runtime': 3172.4502, 'train_samples_per_second': 127.028, 'train_steps_per_second': 1.986, 'train_loss': 0.09531746031746031, 'epoch': 5.0}
```
**eval results**
```bash
{'eval_loss': 0.3128528892993927, 'eval_accuracy': 0.9125852013210597, 'eval_f1': 0.9125852013210597, 'eval_runtime': 45.1795, 'eval_samples_per_second': 314.988, 'eval_steps_per_second': 4.936, 'epoch': 1.0}
{'eval_loss': 0.36222779750823975, 'eval_accuracy': 0.9134987000210807, 'eval_f1': 0.9134987000210807, 'eval_runtime': 29.8241, 'eval_samples_per_second': 477.165, 'eval_steps_per_second': 7.477, 'epoch': 2.0}
{'eval_loss': 0.3943144679069519, 'eval_accuracy': 0.9140608530672476, 'eval_f1': 0.9140608530672476, 'eval_runtime': 30.1085, 'eval_samples_per_second': 472.657, 'eval_steps_per_second': 7.407, 'epoch': 3.0}
{'eval_loss': 0.40938863158226013, 'eval_accuracy': 0.9158878504672897, 'eval_f1': 0.9158878504672897, 'eval_runtime': 30.4546, 'eval_samples_per_second': 467.286, 'eval_steps_per_second': 7.322, 'epoch': 4.0}
{'eval_loss': 0.4137658476829529, 'eval_accuracy': 0.9172932330827067, 'eval_f1': 0.9172932330827067, 'eval_runtime': 30.3464, 'eval_samples_per_second': 468.952, 'eval_steps_per_second': 7.348, 'epoch': 5.0}
```
# Environment
The training was run on a `DL1` instance on AWS using Habana Gaudi1 and `optimum`.
For more information, see: https://github.com/philschmid/deep-learning-habana-huggingface
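The card does not include an inference example; a minimal sketch, assuming the checkpoint loads like any other `transformers` text-classification model (the example utterance is arbitrary):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="philschmid/habana-xlm-r-large-amazon-massive")
# MASSIVE is an intent-classification dataset, so the output is an intent label with a score
print(classifier("set an alarm for five in the morning"))
```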
|
Corianas/ppo-QbertNoFrameskip-v4_3
|
Corianas
| 2022-06-24T13:34:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T13:33:03Z |
---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 16147.50 +/- 1760.98
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
---
# **PPO** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env QbertNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env QbertNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
joefarrington/dqn-SpaceInvadersNoFrameskip-v4
|
joefarrington
| 2022-06-24T12:49:24Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-23T13:10:22Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 977.00 +/- 313.93
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga joefarrington -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga joefarrington
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
dnouri-kipoi/pyt
|
dnouri-kipoi
| 2022-06-24T12:47:13Z | 0 | 0 | null |
[
"kipoi",
"region:us"
] | null | 2022-06-24T11:20:03Z |
---
tags:
- kipoi
---
Simple testing model for Kipoi/pytorch by Roman Kreuzhuber
|
amorfati/mt5-small-finetuned-amazon-en-es
|
amorfati
| 2022-06-24T12:17:04Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-24T10:12:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: amorfati/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amorfati/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7070
- Validation Loss: 2.5179
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 200000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.7070 | 2.5179 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
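The card does not show how to run the model; a minimal inference sketch, assuming the checkpoint loads as a TensorFlow seq2seq model (the input text and generation settings are illustrative only):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "amorfati/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("I loved this book, a great read from start to finish.", return_tensors="tf")
summary_ids = model.generate(inputs["input_ids"], max_length=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```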
|
ashraq/movielense_user_model_cos_384
|
ashraq
| 2022-06-24T11:32:28Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-06-24T11:32:14Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
codeparrot/codeparrot
|
codeparrot
| 2022-06-24T08:28:28Z | 2,327 | 104 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"code",
"generation",
"dataset:codeparrot/codeparrot-clean-train",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: code
tags:
- code
- gpt2
- generation
datasets:
- codeparrot/codeparrot-clean-train
widget:
- text: "from transformer import"
example_title: "Transformers"
- text: "def print_hello_world():\n\t"
example_title: "Hello World!"
- text: "def get_file_size(filepath):"
example_title: "File size"
- text: "import numpy as"
example_title: "Numpy"
model-index:
- name: codeparrot
results:
- task:
name: Code Generation
type: code-generation
dataset:
name: "HumanEval"
type: openai_humaneval
metrics:
- name: pass@1
type: code_eval
value: 3.99
- name: pass@10
type: code_eval
value: 8.69
- name: pass@100
type: code_eval
value: 17.88
---
# CodeParrot 🦜
CodeParrot 🦜 is a GPT-2 model (1.5B parameters) trained to generate Python code. After the initial training and release of v1.0 we trained the model some more and released v1.1 (see below for details).
## Usage
You can load the CodeParrot model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot")
model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot")
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="codeparrot/codeparrot")
outputs = pipe("def hello_world():")
```
## Training
The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/codeparrot/codeparrot-clean) in two steps. After the initial training (v1.0), the model was trained for another 30k steps, resulting in v1.1; the settings for both versions are listed in the following table:
|Config| v1.0| v1.1|
|------|------------------|--------------------|
|Batch size| 512 | 512 |
|Context size| 1024 | 1024 |
|Training steps| 50'000| 30'000 |
|Gradient accumulation| 16| 16 |
|Gradient checkpointing| True| True |
|Learning rate| 2e-4 | 5e-5 |
|Weight decay | 0.1 | 0.1 |
|Warmup steps| 750 | 750 |
|Schedule| Cosine | Cosine |
The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 26 + 15 billion tokens.
## Performance
We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges:
| Metric | v1.0 | v1.1 |
|--------|-----|-----|
|pass@1 | 3.58% | 3.99% |
|pass@10 | 8.03% | 8.69% |
|pass@100 | 14.96% | 17.88% |
The [pass@k metric](https://huggingface.co/metrics/code_eval) gives the probability that at least one out of k generations passes the tests.
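For reference (not part of the original card), the unbiased pass@k estimator from the HumanEval paper, given n generated samples per problem of which c pass the tests, can be sketched as:
```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for one problem: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples for a problem, 8 of them pass the unit tests
print(pass_at_k(n=200, c=8, k=10))
```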
## Resources
- Dataset: [full](https://huggingface.co/datasets/codeparrot/codeparrot-clean), [train](https://huggingface.co/datasets/codeparrot/codeparrot-clean-train), [valid](https://huggingface.co/datasets/codeparrot/codeparrot-clean-valid)
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
- Spaces: [generation](), [highlighting]()
|
humhealth/chroniccaremanagement
|
humhealth
| 2022-06-24T08:14:42Z | 0 | 0 | null |
[
"license:bsl-1.0",
"region:us"
] | null | 2022-06-24T08:14:24Z |
---
license: bsl-1.0
---
https://www.humhealth.com/chronic-care-management/
|
humhealth/remote-patientmonitoring
|
humhealth
| 2022-06-24T08:08:34Z | 0 | 1 | null |
[
"license:bsl-1.0",
"region:us"
] | null | 2022-06-24T08:07:20Z |
---
license: bsl-1.0
---
https://www.humhealth.com/remote-patient-monitoring/
https://www.humhealth.com/chronic-care-management/
|
wiselinjayajos/t5-end2end-questions-generation
|
wiselinjayajos
| 2022-06-24T08:04:22Z | 9 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wiselinjayajos/squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-22T17:26:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiselinjayajos/squad_modified_for_t5_qg
widget:
- text: "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5879 | 0.34 | 100 | 1.9133 |
| 1.9688 | 0.68 | 200 | 1.7313 |
| 1.8513 | 1.02 | 300 | 1.6691 |
| 1.7459 | 1.36 | 400 | 1.6413 |
| 1.7206 | 1.69 | 500 | 1.6200 |
| 1.7026 | 2.03 | 600 | 1.6101 |
| 1.6447 | 2.37 | 700 | 1.5983 |
| 1.6402 | 2.71 | 800 | 1.5979 |
| 1.6332 | 3.05 | 900 | 1.5924 |
| 1.5953 | 3.39 | 1000 | 1.5877 |
| 1.5922 | 3.73 | 1100 | 1.5854 |
| 1.5832 | 4.07 | 1200 | 1.5830 |
| 1.5726 | 4.41 | 1300 | 1.5799 |
| 1.5587 | 4.75 | 1400 | 1.5789 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
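The card ships a widget example but no code; a minimal inference sketch (not part of the original card), reusing the "generate question:" prompt format shown in the widget:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "wiselinjayajos/t5-end2end-questions-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```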
|
jwang/tuned-t5
|
jwang
| 2022-06-24T06:18:32Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-24T06:16:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jwang/tuned-t5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jwang/tuned-t5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6386
- Validation Loss: 3.3773
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.7547 | 3.4438 | 0 |
| 4.6135 | 3.4096 | 1 |
| 4.6386 | 3.3773 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
|
gary109
| 2022-06-24T05:43:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-23T07:53:29Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0763
- Wer: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1632 | 1.0 | 150 | 1.2007 | 0.9875 |
| 1.1615 | 2.0 | 300 | 1.1912 | 0.9875 |
| 1.1487 | 3.0 | 450 | 1.1942 | 0.9875 |
| 1.1207 | 4.0 | 600 | 1.1753 | 0.9875 |
| 1.0638 | 5.0 | 750 | 1.1345 | 0.8214 |
| 1.0174 | 6.0 | 900 | 1.1541 | 0.7665 |
| 0.9946 | 7.0 | 1050 | 1.0799 | 0.7716 |
| 0.9694 | 8.0 | 1200 | 1.0848 | 0.7418 |
| 0.9566 | 9.0 | 1350 | 1.0763 | 0.7344 |
| 0.9466 | 10.0 | 1500 | 1.0791 | 0.7240 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
Rahulrr/language_model_en_he
|
Rahulrr
| 2022-06-24T05:31:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-24T05:28:35Z |
---
language:
- en
- he
tags:
- translation
license: apache-2.0
---
### en-he
* source group: English
* target group: Hebrew
* OPUS readme: [eng-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.zip)
* test set translations: [opus+bt-2021-04-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.test.txt)
* test set scores: [opus+bt-2021-04-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.eng-heb | 37.8 | 0.601 | 10000 | 60359 | 1.000 |
### System Info:
- hf_name: en-he
- source_languages: eng
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'he']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.test.txt
- src_alpha3: eng
- tgt_alpha3: heb
- chrF2_score: 0.601
- bleu: 37.8
- src_name: English
- tgt_name: Hebrew
- train_date: 2021-04-13 00:00:00
- src_alpha2: en
- tgt_alpha2: he
- prefer_old: False
- short_pair: en-he
- helsinki_git_sha: c4e978d8de47875b482653b423dcfe968979d7d5
- transformers_git_sha: 56b83cf049823ed074a655eceb28f31e2077c6eb
- port_machine: LAPIN4GLQ2G3
- port_time: 2022-06-22-19:47
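The card lists benchmarks but no usage snippet; a minimal sketch, assuming the checkpoint behaves like other Marian translation models on the Hub:
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Rahulrr/language_model_en_he"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

inputs = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**inputs)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```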
|
iaanimashaun/distilgpt2-finetuned-wikitext2
|
iaanimashaun
| 2022-06-24T05:13:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T10:57:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7852 | 1.0 | 2334 | 3.6895 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sharpcoder/wav2vec2_bjorn
|
sharpcoder
| 2022-06-24T04:24:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-23T02:53:37Z |
This project fine-tunes the facebook/wav2vec2 speech-to-text model on my own voice, specifically for my personal speech-to-text purposes.
|
sonalily/distilgpt2-finetuned-wikitext2
|
sonalily
| 2022-06-24T04:14:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T01:12:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7607 | 1.0 | 2334 | 3.6664 |
| 3.6527 | 2.0 | 4668 | 3.6473 |
| 3.6015 | 3.0 | 7002 | 3.6429 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sijunhe/nezha-cn-base
|
sijunhe
| 2022-06-24T03:53:56Z | 1,269 | 11 |
transformers
|
[
"transformers",
"pytorch",
"nezha",
"fill-mask",
"arxiv:1909.00204",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-18T16:39:15Z |
---
license: afl-3.0
---
**Please use the `Bert`-related tokenizer classes and the `Nezha`-related model classes.**
[NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch)
## Example Usage
```python
from transformers import BertTokenizer, NezhaModel
tokenizer = BertTokenizer.from_pretrained('sijunhe/nezha-cn-base')
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base")
text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
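As a small follow-up note (not from the original card), the base model returns contextual embeddings rather than task-specific logits:
```python
# Sequence of contextual token embeddings, shape (batch_size, seq_len, hidden_size)
hidden_states = output.last_hidden_state
print(hidden_states.shape)
```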
|
ferzimo/dummy-model
|
ferzimo
| 2022-06-24T03:41:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-24T03:36:09Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.7.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mwong/albert-base-climate-claim-related
|
mwong
| 2022-06-24T03:35:34Z | 3 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-claim-related",
"dataset:mwong/climate-claim-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:56:10Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-claim-related
- mwong/climate-claim-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# ClimateAlbert
ClimateAlbert is a classifier model that predicts whether climate-related evidence is related to a query claim. The model achieved an F1 score of 85.33% on the test dataset "mwong/climate-claim-related". Starting from the pretrained albert-base-v2 model, the classifier head was trained on the Fever dataset and adapted to the climate domain using the ClimateFever dataset.
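The widget joins claim and evidence with separator tokens; a minimal inference sketch (not part of the original card), passing the claim and evidence as a text pair so the tokenizer inserts the separators itself:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mwong/albert-base-climate-claim-related"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

claim = "Earth's changing climate is a critical issue."
evidence = "Legislation has been considered because of fears of climate change."
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```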
|
mwong/albert-base-fever-claim-related
|
mwong
| 2022-06-24T03:34:53Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-claim-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:49:48Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-claim-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# FeverAlbert
FeverAlbert is a classifier model that predicts whether evidence is related to a query claim. The model achieved an F1 score of 88.33% on the test dataset "mwong/fever-claim-related". Starting from the pretrained albert-base-v2 model, the classifier head was trained on the Fever dataset.
|
mwong/roberta-base-climate-evidence-related
|
mwong
| 2022-06-24T03:34:04Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-evidence-related",
"dataset:mwong/climate-evidence-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:52:55Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
- mwong/climate-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# ClimateRoberta
ClimateRoberta is a classifier model that predicts whether climate-related evidence is related to a query claim. The model achieved an F1 score of 80.13% on the test dataset "mwong/climate-evidence-related". Starting from the pretrained roberta-base model, the classifier head was trained on the Fever dataset and adapted to the climate domain using the ClimateFever dataset.
|
mwong/climatebert-base-f-climate-evidence-related
|
mwong
| 2022-06-24T03:32:39Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-evidence-related",
"dataset:mwong/climate-evidence-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:58:32Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
- mwong/climate-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# ClimateBert-related
ClimateBert-related is a classifier model that predicts whether climate-related evidence is related to a query claim. The model achieved an F1 score of 81.90% on the test dataset "mwong/climate-evidence-related". Starting from the pretrained ClimateBert-f model, the classifier head was trained on the Fever dataset and adapted to the climate domain using the ClimateFever dataset.
|
eugenetanjc/wav2vec2-base-timit-demo-google-colab
|
eugenetanjc
| 2022-06-24T02:12:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-18T09:33:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
TencentGameMate/chinese-wav2vec2-large
|
TencentGameMate
| 2022-06-24T02:11:54Z | 702 | 17 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-02T06:20:03Z |
---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
Python package: transformers==4.16.2
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from fairseq import checkpoint_utils
from transformers import (
Wav2Vec2FeatureExtractor,
Wav2Vec2ForPreTraining,
Wav2Vec2Model,
)
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
model_path=""
wav_path=""
mask_prob=0.0
mask_length=10
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = Wav2Vec2Model.from_pretrained(model_path)
# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"  # pick a GPU if one is available
model = model.to(device)
model = model.half()
model.eval()
wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)
# for Wav2Vec2ForPreTraining
# batch_size, raw_sequence_length = input_values.shape
# sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
# mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.0, mask_length=2)
# mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long)
with torch.no_grad():
outputs = model(input_values)
last_hidden_state = outputs.last_hidden_state
# for Wav2Vec2ForPreTraining
# outputs = model(input_values, mask_time_indices=mask_time_indices, output_hidden_states=True)
# last_hidden_state = outputs.hidden_states[-1]
```
|
TencentGameMate/chinese-wav2vec2-base
|
TencentGameMate
| 2022-06-24T01:53:18Z | 625 | 24 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-02T06:17:07Z |
---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
Python package: transformers==4.16.2
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from fairseq import checkpoint_utils
from transformers import (
Wav2Vec2FeatureExtractor,
Wav2Vec2ForPreTraining,
Wav2Vec2Model,
)
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
model_path=""
wav_path=""
mask_prob=0.0
mask_length=10
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = Wav2Vec2Model.from_pretrained(model_path)
# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"  # pick a GPU if one is available
model = model.to(device)
model = model.half()
model.eval()
wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)
# for Wav2Vec2ForPreTraining
# batch_size, raw_sequence_length = input_values.shape
# sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
# mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.0, mask_length=2)
# mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long)
with torch.no_grad():
outputs = model(input_values)
last_hidden_state = outputs.last_hidden_state
# for Wav2Vec2ForPreTraining
# outputs = model(input_values, mask_time_indices=mask_time_indices, output_hidden_states=True)
# last_hidden_state = outputs.hidden_states[-1]
```
|
TencentGameMate/chinese-hubert-base
|
TencentGameMate
| 2022-06-24T01:52:57Z | 1,878 | 36 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-02T06:21:23Z |
---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
Python package: transformers==4.16.2
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from transformers import (
Wav2Vec2FeatureExtractor,
HubertModel,
)
model_path=""
wav_path=""
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = HubertModel.from_pretrained(model_path)
# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"  # pick a GPU if one is available
model = model.to(device)
model = model.half()
model.eval()
wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)
with torch.no_grad():
outputs = model(input_values)
last_hidden_state = outputs.last_hidden_state
```
|
rcanand/dqn-SpaceInvadersNoFrameskip-v4
|
rcanand
| 2022-06-24T01:37:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T01:37:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 204.00 +/- 149.11
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rcanand -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rcanand
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
tuhina13/q-Taxi-v3
|
tuhina13
| 2022-06-23T23:36:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-23T23:36:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.44 +/- 2.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="tuhina13/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
tuhina13/q-FrozenLake-v1-4x4-noSlippery
|
tuhina13
| 2022-06-23T23:29:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-23T23:29:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="tuhina13/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
By43532/Dog
|
By43532
| 2022-06-23T20:21:02Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-23T20:21:02Z |
---
license: bigscience-bloom-rail-1.0
---
|
ArthurZ/opt-66000m
|
ArthurZ
| 2022-06-23T16:24:11Z | 0 | 0 | null |
[
"opt_metasq",
"region:us"
] | null | 2022-06-23T16:20:29Z |
---
tags:
- opt_metasq
---
# This repo lets you run the following checkpoint using facebookresearch/metaseq.
Do the following:
## 1. Install PyTorch
```bash
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
## 2. Install Megatron
```bash
git clone https://github.com/patrickvonplaten/Megatron-LM.git
cd Megatron-LM
pip3 install six regex
pip3 install -e .
```
## 3. Install fairscale
```bash
git clone https://github.com/facebookresearch/fairscale.git
cd fairscale
git checkout prefetch_fsdp_params_simple
pip3 install -e .
```
## 4. Install metaseq
```bash
git clone https://github.com/patrickvonplaten/metaseq.git
cd metaseq
pip3 install -e .
```
## 5. Clone this repo (click top right on "How to clone")
## 6. Run the following:
```bash
cd <path/to/cloned/repo>
bash run.sh
```
|
THUDM/CogView2
|
THUDM
| 2022-06-23T15:36:19Z | 0 | 7 | null |
[
"arxiv:2204.14217",
"license:apache-2.0",
"region:us"
] | null | 2022-06-22T11:23:42Z |
---
license: apache-2.0
---
# CogView2
## Model description
**CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers**
- [Paper](https://arxiv.org/abs/2204.14217)
- [GitHub Repo](https://github.com/THUDM/CogView2)
### Abstract
The development of transformer-based text-to-image models is impeded by their slow generation and complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel auto-regressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, Cross-modal general language model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared to concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images.
## BibTeX entry and citation info
```bibtex
@article{ding2022cogview2,
title={CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers},
author={Ding, Ming and Zheng, Wendi and Hong, Wenyi and Tang, Jie},
journal={arXiv preprint arXiv:2204.14217},
year={2022}
}
```
|
404E/autotrain-formality-1026434913
|
404E
| 2022-06-23T15:19:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:404E/autotrain-data-formality",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-23T15:15:53Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- 404E/autotrain-data-formality
co2_eq_emissions: 7.300283563922049
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1026434913
- CO2 Emissions (in grams): 7.300283563922049
## Validation Metrics
- Loss: 0.5467672348022461
- MSE: 0.5467672944068909
- MAE: 0.5851736068725586
- R2: 0.6883510493648173
- RMSE: 0.7394371628761292
- Explained Variance: 0.6885714530944824
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/404E/autotrain-formality-1026434913
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
gastronomia-para-to2/gastronomia_para_to2
|
gastronomia-para-to2
| 2022-06-23T14:55:10Z | 33 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"recipe-generation",
"es",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-29T06:26:01Z |
---
language:
- es
tags:
- generated_from_trainer
- recipe-generation
widget:
- text: "<RECIPE_START> <INPUT_START> salmón <NEXT_INPUT> zumo de naranja <NEXT_INPUT> aceite de oliva <NEXT_INPUT> sal <NEXT_INPUT> pimienta <INPUT_END> <INGR_START>"
- text: "<RECIPE_START> <INPUT_START> harina <NEXT_INPUT> azúcar <NEXT_INPUT> huevos <NEXT_INPUT> chocolate <NEXT_INPUT> levadura Royal <INPUT_END> <INGR_START>"
inference:
parameters:
top_k: 50
top_p: 0.92
do_sample: True
num_return_sequences: 3
max_new_tokens: 100
---
# Model description
This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on a custom dataset (not publicly available). The dataset is made of data crawled from 3 Spanish cooking websites and contains approximately 50,000 recipes.
It achieves the following results on the evaluation set:
- Loss: 0.5796
## Contributors
- Julián Cendrero ([jucendrero](https://huggingface.co/jucendrero))
- Silvia Duque ([silBERTa](https://huggingface.co/silBERTa))
## How to use it
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_checkpoint = 'gastronomia-para-to2/gastronomia_para_to2'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForCausalLM.from_pretrained(model_checkpoint)
```
The tokenizer makes use of the following special tokens to indicate the structure of the recipe:
```python
special_tokens = [
'<INPUT_START>',
'<NEXT_INPUT>',
'<INPUT_END>',
'<TITLE_START>',
'<TITLE_END>',
'<INGR_START>',
'<NEXT_INGR>',
'<INGR_END>',
'<INSTR_START>',
'<NEXT_INSTR>',
'<INSTR_END>',
'<RECIPE_START>',
'<RECIPE_END>']
```
The input should be of the form:
```python
<RECIPE_START> <INPUT_START> ingredient_1 <NEXT_INPUT> ingredient_2 <NEXT_INPUT> ... <NEXT_INPUT> ingredient_n <INPUT_END> <INGR_START>
```
We are using the following configuration to generate recipes, but feel free to change parameters as needed:
```python
tokenized_input = tokenizer(input, return_tensors='pt')
output = model.generate(**tokenized_input,
max_length=600,
do_sample=True,
top_p=0.92,
top_k=50,
num_return_sequences=3)
pre_output = tokenizer.decode(output[0], skip_special_tokens=False)
```
The recipe ends where the \<RECIPE_END\> special token appears for the first time.
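A minimal post-processing sketch for cutting the output at that token (it assumes `pre_output`, the decoded string produced by the generation snippet above):
```python
# Keep only the first complete recipe from the decoded output.
end_token = '<RECIPE_END>'
if end_token in pre_output:
    recipe = pre_output[: pre_output.index(end_token) + len(end_token)]
else:
    recipe = pre_output  # generation stopped before the recipe ended
print(recipe)
```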
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6213 | 1.0 | 5897 | 0.6214 |
| 0.5905 | 2.0 | 11794 | 0.5995 |
| 0.5777 | 3.0 | 17691 | 0.5893 |
| 0.574 | 4.0 | 23588 | 0.5837 |
| 0.5553 | 5.0 | 29485 | 0.5807 |
| 0.5647 | 6.0 | 35382 | 0.5796 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
## References
The list of special tokens used for generation recipe structure has been taken from:
[RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation](https://www.aclweb.org/anthology/2020.inlg-1.4.pdf).
|
WindowsRegedit/zuowen
|
WindowsRegedit
| 2022-06-23T12:47:18Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T08:46:56Z |
### Essay-Writing Model
For usage instructions, please refer to the [Python automatic essay-writing library (zuowen)](https://github.com/WindowsRegedit/zuowen).
|
transZ/M2M_Vi_Ba
|
transZ
| 2022-06-23T11:01:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"vi",
"ba",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-22T15:26:10Z |
---
language:
- vi
- ba
tags:
- translation
datasets:
- custom dataset
metrics:
- bleu
- sacrebleu
---
# How to run the model
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("transZ/M2M_Vi_Ba")
tokenizer = M2M100Tokenizer.from_pretrained("transZ/M2M_Vi_Ba")
tokenizer.src_lang = "vi"
vi_text = "Hôm nay ba đi chợ."
encoded_vi = tokenizer(vi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_vi, forced_bos_token_id=tokenizer.get_lang_id("ba"))
translate = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
print(translate)
```
|
aico/TrOCR-MNIST
|
aico
| 2022-06-23T10:38:57Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-06-23T06:47:08Z |
This is the ViT-based TrOCR model fine-tuned on the MNIST dataset.
Accuracy: 0.99525
References:
http://yann.lecun.com/exdb/mnist/
https://github.com/microsoft/unilm/tree/master/trocr
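A hedged inference sketch (it assumes the repository ships a TrOCR-compatible processor; if it does not, a base TrOCR processor may need to be substituted, and `digit.png` is a placeholder path to an MNIST-style digit image):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("aico/TrOCR-MNIST")
model = VisionEncoderDecoderModel.from_pretrained("aico/TrOCR-MNIST")

image = Image.open("digit.png").convert("RGB")  # placeholder: path to a digit image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```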
|
Homayoon83/Carball
|
Homayoon83
| 2022-06-23T09:55:39Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-23T09:55:38Z |
---
license: bigscience-bloom-rail-1.0
---
|
Saraswati/TEST2ppo-LunarLander-v2
|
Saraswati
| 2022-06-23T08:54:21Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T11:28:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 195.82 +/- 82.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the `{MODEL FILENAME}` placeholder must be replaced with the actual .zip filename stored in the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="Saraswati/TEST2ppo-LunarLander-v2",
    filename="{MODEL FILENAME}.zip",  # placeholder: replace with the real filename
)
model = PPO.load(checkpoint)
```
|
cjbarrie/autotrain-atc2
|
cjbarrie
| 2022-06-23T08:01:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:cjbarrie/autotrain-data-traintest-sentiment-split",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-23T07:59:46Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- cjbarrie/autotrain-data-traintest-sentiment-split
co2_eq_emissions: 3.1566482249518177
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1024534825
- CO2 Emissions (in grams): 3.1566482249518177
## Validation Metrics
- Loss: 0.5167999267578125
- Accuracy: 0.7523809523809524
- Precision: 0.7377049180327869
- Recall: 0.5555555555555556
- AUC: 0.8142525600535937
- F1: 0.6338028169014086
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/cjbarrie/autotrain-traintest-sentiment-split-1024534825
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534825", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534825", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
cjbarrie/autotrain-atc
|
cjbarrie
| 2022-06-23T08:00:44Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:cjbarrie/autotrain-data-traintest-sentiment-split",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-23T07:59:29Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- cjbarrie/autotrain-data-traintest-sentiment-split
co2_eq_emissions: 2.288443953210163
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1024534822
- CO2 Emissions (in grams): 2.288443953210163
## Validation Metrics
- Loss: 0.5510443449020386
- Accuracy: 0.7619047619047619
- Precision: 0.6761363636363636
- Recall: 0.7345679012345679
- AUC: 0.7936883912336109
- F1: 0.7041420118343196
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/cjbarrie/autotrain-traintest-sentiment-split-1024534822
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534822", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534822", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
MRF18/results
|
MRF18
| 2022-06-23T07:18:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-22T04:42:00Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [MRF18/results](https://huggingface.co/MRF18/results) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kktoto/4L_weight_decay
|
kktoto
| 2022-06-23T04:49:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-23T03:17:44Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 4L_weight_decay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4L_weight_decay
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1312
- Precision: 0.7006
- Recall: 0.6863
- F1: 0.6934
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.157 | 1.0 | 5561 | 0.1464 | 0.6943 | 0.6153 | 0.6524 | 0.9465 |
| 0.1454 | 2.0 | 11122 | 0.1396 | 0.6921 | 0.6491 | 0.6699 | 0.9486 |
| 0.1414 | 3.0 | 16683 | 0.1372 | 0.6841 | 0.6746 | 0.6793 | 0.9492 |
| 0.1335 | 4.0 | 22244 | 0.1339 | 0.6997 | 0.6617 | 0.6802 | 0.9505 |
| 0.1308 | 5.0 | 27805 | 0.1339 | 0.6963 | 0.6763 | 0.6862 | 0.9510 |
| 0.1285 | 6.0 | 33366 | 0.1320 | 0.7102 | 0.6639 | 0.6863 | 0.9519 |
| 0.1257 | 7.0 | 38927 | 0.1306 | 0.7031 | 0.6771 | 0.6898 | 0.9521 |
| 0.1222 | 8.0 | 44488 | 0.1324 | 0.7005 | 0.6836 | 0.6919 | 0.9522 |
| 0.1207 | 9.0 | 50049 | 0.1313 | 0.7017 | 0.6832 | 0.6923 | 0.9524 |
| 0.1195 | 10.0 | 55610 | 0.1312 | 0.7006 | 0.6863 | 0.6934 | 0.9524 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huawei-noah/SPIRAL-base-MCT
|
huawei-noah
| 2022-06-23T03:29:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-17T09:19:20Z |
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training
========
This is the pretrained **SPIRAL Base with Multi-Condition Training** model, trained on 960 hours of LibriSpeech data together with the noise dataset from the [ICASSP 2021 DNS Challenge](https://github.com/microsoft/DNS-Challenge/tree/icassp2021-final) for noise robustness.
Citation
========
If you find SPIRAL useful in your research, please cite the following paper:
```
@inproceedings{huang2022spiral,
title={{SPIRAL}: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training},
author={Wenyong Huang and Zhenhe Zhang and Yu Ting Yeung and Xin Jiang and Qun Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=TBpg4PnXhYH}
}
```
|
huawei-noah/SPIRAL-Large
|
huawei-noah
| 2022-06-23T03:29:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-17T09:17:20Z |
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training
========
This is the pretrained **SPIRAL LARGE** model, trained on 60k hours of LibriLight data.
Citation
========
If you find SPIRAL useful in your research, please cite the following paper:
```
@inproceedings{huang2022spiral,
title={{SPIRAL}: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training},
author={Wenyong Huang and Zhenhe Zhang and Yu Ting Yeung and Xin Jiang and Qun Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=TBpg4PnXhYH}
}
```
|
huawei-noah/SPIRAL-base
|
huawei-noah
| 2022-06-23T03:26:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-14T09:40:29Z |
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training
========
This is the pretrained **SPIRAL Base** model, trained on 960 hours of LibriSpeech data.
Citation
========
If you find SPIRAL useful in your research, please cite the following paper:
```
@inproceedings{huang2022spiral,
title={{SPIRAL}: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training},
author={Wenyong Huang and Zhenhe Zhang and Yu Ting Yeung and Xin Jiang and Qun Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=TBpg4PnXhYH}
}
```
|
martin-ha/text_image_dual_encoder
|
martin-ha
| 2022-06-23T03:23:43Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-06-20T16:19:19Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamW', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay': 0.001, 'exclude_from_weight_decay': None}
- training_precision: float32
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
justpyschitry/autotrain-Wikipeida_Article_Classifier_by_Chap-1022634735
|
justpyschitry
| 2022-06-23T02:24:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:justpyschitry/autotrain-data-Wikipeida_Article_Classifier_by_Chap",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-23T02:16:40Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- justpyschitry/autotrain-data-Wikipeida_Article_Classifier_by_Chap
co2_eq_emissions: 16.816741650923202
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1022634735
- CO2 Emissions (in grams): 16.816741650923202
## Validation Metrics
- Loss: 0.4373569190502167
- Accuracy: 0.9027552674230146
- Macro F1: 0.8938134766263609
- Micro F1: 0.9027552674230146
- Weighted F1: 0.9023653852553881
- Macro Precision: 0.8970541297231431
- Micro Precision: 0.9027552674230146
- Weighted Precision: 0.903514305510645
- Macro Recall: 0.892665778987219
- Micro Recall: 0.9027552674230146
- Weighted Recall: 0.9027552674230146
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/justpyschitry/autotrain-Wikipeida_Article_Classifier_by_Chap-1022634735
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("justpyschitry/autotrain-Wikipeida_Article_Classifier_by_Chap-1022634735", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("justpyschitry/autotrain-Wikipeida_Article_Classifier_by_Chap-1022634735", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
BigSalmon/InformalToFormalLincoln52
|
BigSalmon
| 2022-06-23T02:02:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T01:38:07Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln52")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln52")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
|
akraut/test_model
|
akraut
| 2022-06-23T01:04:38Z | 0 | 0 | null |
[
"image-classification",
"license:afl-3.0",
"region:us"
] |
image-classification
| 2022-06-22T21:56:23Z |
---
tags:
- image-classification
license: afl-3.0
---
|
Popppoogtcdcr/H
|
Popppoogtcdcr
| 2022-06-23T00:33:17Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-06-23T00:33:17Z |
---
license: cc-by-nc-sa-4.0
---
|
tals/albert-base-vitaminc_flagging
|
tals
| 2022-06-22T23:56:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
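A hedged usage sketch (the sentence-pair input convention and example below are assumptions for illustration; the exact input format and label set are documented in the VitaminC repository):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tals/albert-base-vitaminc_flagging"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical before/after revision pair to be flagged.
inputs = tokenizer("The film grossed $50 million.",
                   "The film grossed $75 million.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```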
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
JMillan/q-Taxi-v3
|
JMillan
| 2022-06-22T22:35:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T22:35:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="JMillan/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Dugerij/q-Taxi-v3
|
Dugerij
| 2022-06-22T21:43:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T21:43:47Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Dugerij/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Dugerij/q-FrozenLake-v1-4x4-noSlippery
|
Dugerij
| 2022-06-22T21:37:21Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T21:37:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Dugerij/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
mmillet/xlmroberta-2nd-finetune-epru
|
mmillet
| 2022-06-22T19:14:28Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-22T18:38:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlmroberta-2nd-finetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta-2nd-finetune-epru
This model is a fine-tuned version of [mmillet/xlm-roberta-base_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/xlm-roberta-base_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3666
- Accuracy: 0.9325
- F1: 0.9329
- Precision: 0.9352
- Recall: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4757 | 1.0 | 12 | 0.2387 | 0.9264 | 0.9267 | 0.9333 | 0.9264 |
| 0.3086 | 2.0 | 24 | 0.3059 | 0.9141 | 0.9143 | 0.9270 | 0.9141 |
| 0.2151 | 3.0 | 36 | 0.2394 | 0.9202 | 0.9214 | 0.9266 | 0.9202 |
| 0.1629 | 4.0 | 48 | 0.3025 | 0.9325 | 0.9332 | 0.9385 | 0.9325 |
| 0.0911 | 5.0 | 60 | 0.2597 | 0.9387 | 0.9390 | 0.9434 | 0.9387 |
| 0.0455 | 6.0 | 72 | 0.3476 | 0.9387 | 0.9389 | 0.9400 | 0.9387 |
| 0.0521 | 7.0 | 84 | 0.3630 | 0.9325 | 0.9329 | 0.9356 | 0.9325 |
| 0.029 | 8.0 | 96 | 0.3100 | 0.9509 | 0.9513 | 0.9531 | 0.9509 |
| 0.0379 | 9.0 | 108 | 0.3044 | 0.9448 | 0.9450 | 0.9455 | 0.9448 |
| 0.0363 | 10.0 | 120 | 0.4181 | 0.9141 | 0.9147 | 0.9191 | 0.9141 |
| 0.0165 | 11.0 | 132 | 0.3666 | 0.9325 | 0.9329 | 0.9352 | 0.9325 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mmillet/xlm-roberta-base_single_finetuned_on_cedr_augmented
|
mmillet
| 2022-06-22T18:01:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-22T17:23:58Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base_single_finetuned_on_cedr_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_single_finetuned_on_cedr_augmented
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4650
- Accuracy: 0.8820
- F1: 0.8814
- Precision: 0.8871
- Recall: 0.8820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8868 | 1.0 | 69 | 0.4939 | 0.8403 | 0.8376 | 0.8431 | 0.8403 |
| 0.4248 | 2.0 | 138 | 0.3969 | 0.8779 | 0.8768 | 0.8798 | 0.8779 |
| 0.3197 | 3.0 | 207 | 0.4019 | 0.8758 | 0.8757 | 0.8758 | 0.8758 |
| 0.2737 | 4.0 | 276 | 0.3915 | 0.8831 | 0.8827 | 0.8847 | 0.8831 |
| 0.2053 | 5.0 | 345 | 0.4445 | 0.8643 | 0.8650 | 0.8714 | 0.8643 |
| 0.1705 | 6.0 | 414 | 0.4650 | 0.8820 | 0.8814 | 0.8871 | 0.8820 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
atendstowards0/codeparrot-ds
|
atendstowards0
| 2022-06-22T17:56:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-22T17:45:09Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
jamesmarcel/xlm-roberta-base-finetuned-panx-de
|
jamesmarcel
| 2022-06-22T17:26:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-22T17:03:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mayoughi/where_am_I_hospital-balcony-hallway-airport-coffee-house
|
mayoughi
| 2022-06-22T16:00:57Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-22T16:00:45Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: where_am_I_hospital-balcony-hallway-airport-coffee-house
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8839285969734192
---
# where_am_I_hospital-balcony-hallway-airport-coffee-house
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
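A minimal inference sketch using the `transformers` image-classification pipeline (`photo.jpg` is a placeholder for your own image path or URL):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mayoughi/where_am_I_hospital-balcony-hallway-airport-coffee-house",
)
print(classifier("photo.jpg"))  # placeholder path or URL to an image
```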
## Example Images
#### airport

#### balcony

#### coffee house indoors

#### hallway

#### hospital

|
mmazuecos/q-FrozenLake-v1-4x4-noSlippery
|
mmazuecos
| 2022-06-22T15:57:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T15:57:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mmazuecos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
abhishek/autotrain-dog-vs-food
|
abhishek
| 2022-06-22T14:51:28Z | 61 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain",
"dataset:abhishek/autotrain-data-vision_652fee16113a4f07a2452e021a22a934",
"dataset:sasha/dog-food",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-22T10:33:54Z |
---
tags: autotrain
datasets:
- abhishek/autotrain-data-vision_652fee16113a4f07a2452e021a22a934
- sasha/dog-food
co2_eq_emissions: 2.050948967287266
model-index:
- name: autotrain-dog-vs-food
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: sasha/dog-food
type: sasha/dog-food
metrics:
- name: Accuracy
type: accuracy
value: 0.9976190476190476
- task:
type: image-classification
name: Image Classification
dataset:
name: sasha/dog-food
type: sasha/dog-food
config: sasha--dog-food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 1.0
verified: true
- name: Precision
type: precision
value: 1.0
verified: true
- name: Recall
type: recall
value: 1.0
verified: true
- name: AUC
type: auc
value: 1.0
verified: true
- name: F1
type: f1
value: 1.0
verified: true
- name: loss
type: loss
value: 0.001115015591494739
verified: true
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 264300
- CO2 Emissions (in grams): 2.050948967287266
## Validation Metrics
- Loss: 0.009216072037816048
- Accuracy: 0.9976190476190476
- Macro F1: 0.9973261861865685
- Micro F1: 0.9976190476190476
- Weighted F1: 0.997621154535828
- Macro Precision: 0.9964539007092199
- Micro Precision: 0.9976190476190476
- Weighted Precision: 0.9976359338061465
- Macro Recall: 0.9982142857142857
- Micro Recall: 0.9976190476190476
- Weighted Recall: 0.9976190476190476
|
sasha/dog-food-convnext-tiny-224
|
sasha
| 2022-06-22T13:56:32Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"huggingpics",
"dataset:sasha/dog-food",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-21T14:10:13Z |
---
tags:
- image-classification
- pytorch
- huggingpics
datasets:
- sasha/dog-food
metrics:
- accuracy
- f1
model-index:
- name: dog-food-convnext-tiny-224
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Dog Food
type: sasha/dog-food
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# dog-food-convnext-tiny-224
This model was trained on the `train` split of the [Dogs vs Food](https://huggingface.co/datasets/sasha/dog-food) dataset -- try training your own using
[the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb)!
## Example Images
#### dog

#### food

|
flood/xlm-roberta-base-finetuned-panx-all
|
flood
| 2022-06-22T13:50:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-09T17:31:22Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1739
- F1: 0.8525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3 | 1.0 | 835 | 0.1894 | 0.8104 |
| 0.1564 | 2.0 | 1670 | 0.1751 | 0.8423 |
| 0.1032 | 3.0 | 2505 | 0.1739 | 0.8525 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
flood/xlm-roberta-base-finetuned-panx-en
|
flood
| 2022-06-22T13:43:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-09T17:25:36Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6777777777777778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4025
- F1: 0.6778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1069 | 1.0 | 50 | 0.5201 | 0.5010 |
| 0.4975 | 2.0 | 100 | 0.4503 | 0.6198 |
| 0.3705 | 3.0 | 150 | 0.4025 | 0.6778 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Massinissa/Jeux2BERT
|
Massinissa
| 2022-06-22T12:45:12Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"flaubert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-02T15:46:03Z |
# Jeux2BERT
Jeux2BERT is a Flaubert language model augmented by the lexico-semantic network JeuxDeMots.
The model thus tries to capture the distributional and relational properties of words, and also to discriminate between the different relational properties that can hold between words or syntagms.
The web application includes three tasks: Link Prediction (Classification de triplets), Relation Prediction (Prédiction de Relation), and Triple Ranking (Classement de triplets).
# Web App
[https://github.com/atmani-massinissa/Jeux2BERT_APP/tree/main]
# Demo
[https://share.streamlit.io/atmani-massinissa/jeux2bert_app/main/app.py?page=Classement+de+triplets]
The Triple Ranking task (Classement de triplets) does not run smoothly on the Streamlit server because of its inference time, so it is better to run it locally rather than on the demo server.
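A minimal feature-extraction sketch (the link-prediction, relation-prediction, and ranking heads live in the web-app repository above; this only pulls contextual embeddings from the backbone, and the French example sentence is arbitrary):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Massinissa/Jeux2BERT")
model = AutoModel.from_pretrained("Massinissa/Jeux2BERT")

inputs = tokenizer("le chat mange la souris", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```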
|
ml4pubmed/xtremedistil-l12-h384-uncased_pub_section
|
ml4pubmed
| 2022-06-22T12:29:07Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"document sections",
"sentence classification",
"document classification",
"medical",
"health",
"biomedical",
"en",
"dataset:pubmed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-04T01:32:45Z |
---
language:
- en
datasets:
- pubmed
metrics:
- f1
tags:
- text-classification
- document sections
- sentence classification
- document classification
- medical
- health
- biomedical
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# xtremedistil-l12-h384-uncased_pub_section
- original model file name: textclassifer_xtremedistil-l12-h384-uncased_pubmed_20k
- This is a fine-tuned checkpoint of `microsoft/xtremedistil-l12-h384-uncased` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## usage in python
install transformers as needed: `pip install -U transformers`
run the following, changing the example text to your use case:
```
from transformers import pipeline
model_tag = "ml4pubmed/xtremedistil-l12-h384-uncased_pub_section"
classifier = pipeline(
'text-classification',
model=model_tag,
)
prompt = """
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
"""
classifier(
prompt,
) # classify the sentence
```
## metadata
### training_parameters
- date_run: Apr-24-2022_t-12
- huggingface_tag: microsoft/xtremedistil-l12-h384-uncased
|
Mizew/autotrain-avar-1016534299
|
Mizew
| 2022-06-22T12:12:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"translation",
"en",
"es",
"dataset:Mizew/autotrain-data-avar",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-22T11:55:38Z |
---
tags:
- autotrain
- translation
language:
- en
- es
datasets:
- Mizew/autotrain-data-avar
co2_eq_emissions: 0.07815966018818815
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1016534299
- CO2 Emissions (in grams): 0.07815966018818815
## Validation Metrics
- Loss: 0.9978321194648743
- SacreBLEU: 13.8459
- Gen len: 6.0588
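A hedged usage sketch (the example sentence is arbitrary, and the card does not document any required language prefixes, so none are added here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Mizew/autotrain-avar-1016534299"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```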
|
Corianas/qrdqn-3Frame-SpaceInvadersNoFrameskip_1.best
|
Corianas
| 2022-06-22T10:31:17Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T07:14:05Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 1855.50 +/- 869.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
[W&B report](https://wandb.ai/corianas/sb3/reports/QRDQN-Agent-playing-SpaceInvadersNoFrameskip-v4--VmlldzoyMjA4NDk4)
There is a longer video of this agent playing at [Youtube](https://youtu.be/OmxWdSx0ouY)
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 3),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
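Outside the RL Zoo scripts, the downloaded checkpoint can also be loaded directly with sb3-contrib; a rough sketch (the zip path is an assumption about where the download step above places the file):
```python
from sb3_contrib import QRDQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack
# Path assumed from the RL Zoo download step above.
model = QRDQN.load("logs/qrdqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
# Recreate the training-time observation space: Atari wrappers + 3-frame stack.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=3)
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```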
|
Corianas/qrdqn-3Frame-SpaceInvadersNoFrameskip_1
|
Corianas
| 2022-06-22T10:29:37Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T07:11:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 1582.00 +/- 771.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
[W&B report](https://wandb.ai/corianas/sb3/reports/QRDQN-Agent-playing-SpaceInvadersNoFrameskip-v4--VmlldzoyMjA4NDk4)
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 3),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
ThomasSimonini/MLAgents-Pyramids
|
ThomasSimonini
| 2022-06-22T09:57:13Z | 14 | 2 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-03-16T10:07:11Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: ThomasSimonini/MLAgents-Pyramids
3. Step 2: Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Elron/deberta-v3-large-emotion
|
Elron
| 2022-06-22T09:48:01Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-22T08:54:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
results: []
---
# deberta-v3-large-emotion
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
[source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)
## Intended uses & limitations
Classifying attributes of interest in Twitter-like data.
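For instance, a minimal inference sketch with the 🤗 pipeline API (the example tweet is illustrative; label names come from the checkpoint's config):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="Elron/deberta-v3-large-emotion")
print(classifier("I can't believe they cancelled the show, this is awful!"))
# -> [{'label': ..., 'score': ...}]
```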
## Training and evaluation data
[tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Training procedure
Fine-tuned and evaluated with [run_glue.py]()
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
- label_smoothing_factor: 0.1
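These settings map roughly onto a `TrainingArguments` configuration like the following (a reconstruction for illustration, not the exact training command; the output directory is hypothetical):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="deberta-v3-large-emotion",  # hypothetical output path
    learning_rate=7e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=10.0,
    label_smoothing_factor=0.1,
    # adam_beta1/adam_beta2/adam_epsilon are left at the defaults listed above
)
```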
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2787 | 0.49 | 100 | 1.1127 | 0.4866 |
| 1.089 | 0.98 | 200 | 0.9668 | 0.7139 |
| 0.9134 | 1.47 | 300 | 0.8720 | 0.7834 |
| 0.8618 | 1.96 | 400 | 0.7726 | 0.7941 |
| 0.686 | 2.45 | 500 | 0.7337 | 0.8209 |
| 0.6333 | 2.94 | 600 | 0.7350 | 0.8235 |
| 0.5765 | 3.43 | 700 | 0.7561 | 0.8235 |
| 0.5502 | 3.92 | 800 | 0.7273 | 0.8476 |
| 0.5049 | 4.41 | 900 | 0.8137 | 0.8102 |
| 0.4695 | 4.9 | 1000 | 0.7581 | 0.8289 |
| 0.4657 | 5.39 | 1100 | 0.8404 | 0.8048 |
| 0.4549 | 5.88 | 1200 | 0.7800 | 0.8369 |
| 0.4305 | 6.37 | 1300 | 0.8575 | 0.8235 |
| 0.4209 | 6.86 | 1400 | 0.8572 | 0.8102 |
| 0.3983 | 7.35 | 1500 | 0.8392 | 0.8316 |
| 0.4139 | 7.84 | 1600 | 0.8152 | 0.8209 |
| 0.393 | 8.33 | 1700 | 0.8261 | 0.8289 |
| 0.3979 | 8.82 | 1800 | 0.8328 | 0.8235 |
| 0.3928 | 9.31 | 1900 | 0.8364 | 0.8209 |
| 0.3848 | 9.8 | 2000 | 0.8322 | 0.8235 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
Elron/deberta-v3-large-offensive
|
Elron
| 2022-06-22T09:47:41Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-22T08:56:09Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
results: []
---
# deberta-v3-large-offensive
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
[source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)
## Intended uses & limitations
Classifying attributes of interest in Twitter-like data.
## Training and evaluation data
[tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Training procedure
Fine-tuned and evaluated with [run_glue.py]()
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6417 | 0.27 | 100 | 0.6283 | 0.6533 |
| 0.5105 | 0.54 | 200 | 0.4588 | 0.7915 |
| 0.4554 | 0.81 | 300 | 0.4500 | 0.7968 |
| 0.4212 | 1.08 | 400 | 0.4773 | 0.7938 |
| 0.4054 | 1.34 | 500 | 0.4311 | 0.7983 |
| 0.3922 | 1.61 | 600 | 0.4588 | 0.7998 |
| 0.3776 | 1.88 | 700 | 0.4367 | 0.8066 |
| 0.3535 | 2.15 | 800 | 0.4675 | 0.8074 |
| 0.33 | 2.42 | 900 | 0.4874 | 0.8021 |
| 0.3113 | 2.69 | 1000 | 0.4949 | 0.8044 |
| 0.3203 | 2.96 | 1100 | 0.4550 | 0.8059 |
| 0.248 | 3.23 | 1200 | 0.4858 | 0.8036 |
| 0.2478 | 3.49 | 1300 | 0.5299 | 0.8029 |
| 0.2371 | 3.76 | 1400 | 0.5013 | 0.7991 |
| 0.2388 | 4.03 | 1500 | 0.5520 | 0.8021 |
| 0.1744 | 4.3 | 1600 | 0.6687 | 0.7915 |
| 0.1788 | 4.57 | 1700 | 0.7560 | 0.7689 |
| 0.1652 | 4.84 | 1800 | 0.6985 | 0.7832 |
| 0.1596 | 5.11 | 1900 | 0.7191 | 0.7915 |
| 0.1214 | 5.38 | 2000 | 0.9097 | 0.7893 |
| 0.1432 | 5.64 | 2100 | 0.9184 | 0.7787 |
| 0.1145 | 5.91 | 2200 | 0.9620 | 0.7878 |
| 0.1069 | 6.18 | 2300 | 0.9489 | 0.7893 |
| 0.1012 | 6.45 | 2400 | 1.0107 | 0.7817 |
| 0.0942 | 6.72 | 2500 | 1.0021 | 0.7885 |
| 0.087 | 6.99 | 2600 | 1.1090 | 0.7915 |
| 0.0598 | 7.26 | 2700 | 1.1735 | 0.7795 |
| 0.0742 | 7.53 | 2800 | 1.1433 | 0.7817 |
| 0.073 | 7.79 | 2900 | 1.1343 | 0.7953 |
| 0.0553 | 8.06 | 3000 | 1.2258 | 0.7840 |
| 0.0474 | 8.33 | 3100 | 1.2461 | 0.7817 |
| 0.0515 | 8.6 | 3200 | 1.2996 | 0.7825 |
| 0.0551 | 8.87 | 3300 | 1.2819 | 0.7855 |
| 0.0541 | 9.14 | 3400 | 1.2808 | 0.7855 |
| 0.0465 | 9.41 | 3500 | 1.3398 | 0.7817 |
| 0.0407 | 9.68 | 3600 | 1.3231 | 0.7825 |
| 0.0343 | 9.94 | 3700 | 1.3330 | 0.7825 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
asnorkin/q-FrozenLake-v1-4x4-noSlippery
|
asnorkin
| 2022-06-22T09:22:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T09:22:00Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="asnorkin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
kktoto/tiny_no_focal_v2
|
kktoto
| 2022-06-22T08:50:37Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-22T06:39:14Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_no_focal_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_no_focal_v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1314
- Precision: 0.7013
- Recall: 0.6837
- F1: 0.6924
- Accuracy: 0.9522
## Model description
More information needed
## Intended uses & limitations
More information needed
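In the absence of documented usage, the checkpoint can still be queried with the standard `token-classification` pipeline; a minimal sketch (the input sentence is a placeholder, and the meaning of the predicted tags is not documented in this card):
```python
from transformers import pipeline
tagger = pipeline("token-classification", model="kktoto/tiny_no_focal_v2")
for tag in tagger("hello how are you doing today"):
    print(tag["word"], tag["entity"], round(tag["score"], 3))
```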
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1574 | 1.0 | 5561 | 0.1471 | 0.6907 | 0.6186 | 0.6527 | 0.9462 |
| 0.1456 | 2.0 | 11122 | 0.1396 | 0.6923 | 0.6473 | 0.6690 | 0.9485 |
| 0.1412 | 3.0 | 16683 | 0.1373 | 0.6845 | 0.6705 | 0.6774 | 0.9490 |
| 0.1338 | 4.0 | 22244 | 0.1343 | 0.6988 | 0.6640 | 0.6810 | 0.9505 |
| 0.1311 | 5.0 | 27805 | 0.1342 | 0.6971 | 0.6751 | 0.6859 | 0.9510 |
| 0.1289 | 6.0 | 33366 | 0.1324 | 0.7081 | 0.6653 | 0.6860 | 0.9517 |
| 0.1258 | 7.0 | 38927 | 0.1309 | 0.7053 | 0.6731 | 0.6888 | 0.9521 |
| 0.1223 | 8.0 | 44488 | 0.1325 | 0.7001 | 0.6818 | 0.6908 | 0.9519 |
| 0.1213 | 9.0 | 50049 | 0.1316 | 0.7020 | 0.6813 | 0.6915 | 0.9522 |
| 0.1197 | 10.0 | 55610 | 0.1314 | 0.7013 | 0.6837 | 0.6924 | 0.9522 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
unity/ML-Agents-Worm
|
unity
| 2022-06-22T08:25:30Z | 0 | 1 |
ml-agents
|
[
"ml-agents",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2022-06-22T06:51:06Z |
---
license: apache-2.0
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Step 1: Write your model_id: unity/ML-Agents-Worm
3. Step 2: Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
unity/ML-Agents-Walker
|
unity
| 2022-06-22T08:24:57Z | 0 | 7 |
ml-agents
|
[
"ml-agents",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Walker",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2022-06-22T07:07:20Z |
---
license: apache-2.0
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Walker
library_name: ml-agents
---
# **ppo** Agent playing **Walker**
This is a trained model of a **ppo** agent playing **Walker** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Walker
2. Step 1: Write your model_id: unity/ML-Agents-Walker
3. Step 2: Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
merve/text_image_dual_encoder
|
merve
| 2022-06-22T08:17:42Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-06-22T08:17:04Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
shahma/distilbert-base-uncased-finetuned-squad
|
shahma
| 2022-06-22T07:22:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-22T02:02:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
shpotes/codegen-350M-mono
|
shpotes
| 2022-06-22T06:02:10Z | 17 | 3 |
transformers
|
[
"transformers",
"pytorch",
"codegen",
"text-generation",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-30T06:37:21Z |
---
license: bsd-3-clause
---
# Overview
The CodeGen model was proposed by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong from Salesforce Research.
The abstract from the paper is the following:
Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We plan to make the training library JaxFormer including checkpoints available as open source.
# How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("shpotes/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("shpotes/codegen-350M-mono", trust_remote_code=True)
# Example sampling settings -- adjust to your use case.
context = "def fibonacci(n):"       # prompt to complete
num_return_sequences = 1
temp = 0.2                          # sampling temperature
top_p = 0.95
max_length_sample = 128             # number of new tokens to sample
# codegen's tokenizer has no pad token; fall back to EOS
pad_token_id = tokenizer.pad_token_id or tokenizer.eos_token_id
input_ids = tokenizer(
    context,
    truncation=True,
    padding=True,
    return_tensors='pt',
).input_ids
input_ids_len = input_ids.shape[1]
with torch.no_grad():
    tokens = model.generate(
        input_ids,
        do_sample=True,
        num_return_sequences=num_return_sequences,
        temperature=temp,
        max_length=input_ids_len + max_length_sample,
        top_p=top_p,
        pad_token_id=pad_token_id,
        use_cache=True,
    )
text = tokenizer.batch_decode(tokens[:, input_ids_len:, ...])
print(text)
```
|
Suva/uptag-url-model-v2
|
Suva
| 2022-06-22T05:48:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:arxiv",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-17T04:46:15Z |
---
datasets:
- arxiv
widget:
- text: "summarize: We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production
machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and
handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks.
In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year,
Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing.
In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors
1.7-2.9 times versus production systems."
license: mit
---
## Usage:
```python
abstract = """We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production
machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and
handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a
set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks.
In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year,
Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time,
Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems.
"""
```
### Using Transformers🤗
```python
model_name = "Suva/uptag-url-model-v2"
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids = tokenizer.encode("summarize: " + abstract, return_tensors="pt", add_special_tokens=True)
generated_ids = model.generate(input_ids=input_ids,num_beams=5,max_length=100,repetition_penalty=2.5,length_penalty=1,early_stopping=True,num_return_sequences=3)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)
# output
["Overton: Building, Deploying, and Monitoring Machine Learning Systems for Engineers",
"Overton: A System for Building, Monitoring, and Improving Production Machine Learning Systems",
"Overton: Building, Monitoring, and Improving Production Machine Learning Systems"]
```
|
RuiqianLi/Malaya-speech_fine-tune_realcase_22_Jun
|
RuiqianLi
| 2022-06-22T04:55:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:uob_singlish",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-22T04:11:45Z |
---
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: Malaya-speech_fine-tune_realcase_22_Jun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Malaya-speech_fine-tune_realcase_22_Jun
This model is a fine-tuned version of [malay-huggingface/wav2vec2-xls-r-300m-mixed](https://huggingface.co/malay-huggingface/wav2vec2-xls-r-300m-mixed) on the uob_singlish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9569
- Wer: 0.4062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6913 | 20.0 | 100 | 0.9569 | 0.4062 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
vanichandna/xlm-roberta-finetuned-squad
|
vanichandna
| 2022-06-22T04:49:42Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-07T09:42:26Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: vanichandna/xlmroberta-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vanichandna/xlmroberta-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the SQuAD v1.1 dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6636
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
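In the absence of documented usage, a minimal question-answering sketch (the question/context pair is illustrative; `framework="tf"` is set because the repo ships TensorFlow weights):
```python
from transformers import pipeline
qa = pipeline(
    "question-answering",
    model="vanichandna/xlm-roberta-finetuned-squad",
    framework="tf",  # the repo ships TensorFlow weights
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on SQuAD v1.1, a span-extraction question answering dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```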
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16476, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2842 | 0 |
| 0.8425 | 1 |
| 0.6636 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
veb/twitch-distilbert-base-cased-finetuned
|
veb
| 2022-06-22T04:24:36Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-22T04:18:29Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: veb/twitch-distilbert-base-cased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# veb/twitch-distilbert-base-cased-finetuned
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.5140
- Validation Loss: 5.4524
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
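In the absence of documented usage, a minimal fill-mask sketch (the prompt is illustrative; `[MASK]` is the mask token of the underlying `distilbert-base-cased` tokenizer):
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="veb/twitch-distilbert-base-cased-finetuned")
for pred in unmasker("That play was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```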
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -982, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.5140 | 5.4524 | 0 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.7.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Evelyn18/distilbert-base-uncased-finetuned-squad
|
Evelyn18
| 2022-06-22T03:50:33Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-08T22:17:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.5219 |
| No log | 2.0 | 10 | 4.9747 |
| No log | 3.0 | 15 | 4.5448 |
| No log | 4.0 | 20 | 4.1843 |
| No log | 5.0 | 25 | 3.8491 |
| No log | 6.0 | 30 | 3.6789 |
| No log | 7.0 | 35 | 3.5018 |
| No log | 8.0 | 40 | 3.4254 |
| No log | 9.0 | 45 | 3.4566 |
| No log | 10.0 | 50 | 3.4326 |
| No log | 11.0 | 55 | 3.5741 |
| No log | 12.0 | 60 | 3.5260 |
| No log | 13.0 | 65 | 3.7003 |
| No log | 14.0 | 70 | 3.7499 |
| No log | 15.0 | 75 | 3.7961 |
| No log | 16.0 | 80 | 3.8578 |
| No log | 17.0 | 85 | 3.9928 |
| No log | 18.0 | 90 | 4.0305 |
| No log | 19.0 | 95 | 4.0024 |
| No log | 20.0 | 100 | 4.0087 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
csukuangfj/sherpa-long-audio-test-data
|
csukuangfj
| 2022-06-22T03:24:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-22T03:22:53Z |
# Introduction
Long sound files for testing streaming ASR in [sherpa](https://github.com/k2-fsa/sherpa).
|
heriosousa/dqn-SpaceInvadersNoFrameskip-v4
|
heriosousa
| 2022-06-22T03:16:32Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T03:15:52Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 653.00 +/- 141.16
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga heriosousa -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga heriosousa
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4
|
gary109
| 2022-06-22T02:22:03Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-21T09:18:30Z |
---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- Wer: 0.1597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1335 | 1.0 | 138 | 0.4256 | 0.1605 |
| 0.1288 | 2.0 | 276 | 0.4234 | 0.1602 |
| 0.1278 | 3.0 | 414 | 0.4243 | 0.1597 |
| 0.1345 | 4.0 | 552 | 0.4231 | 0.1597 |
| 0.1344 | 5.0 | 690 | 0.4246 | 0.1597 |
| 0.1237 | 6.0 | 828 | 0.4279 | 0.1595 |
| 0.1109 | 7.0 | 966 | 0.4354 | 0.1573 |
| 0.1247 | 8.0 | 1104 | 0.4318 | 0.1570 |
| 0.1372 | 9.0 | 1242 | 0.4341 | 0.1573 |
| 0.1256 | 10.0 | 1380 | 0.4328 | 0.1575 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
lucianpopa/autotrain-qn-classification-1015534072
|
lucianpopa
| 2022-06-21T22:26:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:lucianpopa/autotrain-data-qn-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-21T22:23:01Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucianpopa/autotrain-data-qn-classification
co2_eq_emissions: 0.013170440014043236
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1015534072
- CO2 Emissions (in grams): 0.013170440014043236
## Validation Metrics
- Loss: 1.493847370147705
- Accuracy: 0.7333333333333333
- Macro F1: 0.6777777777777777
- Micro F1: 0.7333333333333333
- Weighted F1: 0.6777777777777777
- Macro Precision: 0.6555555555555554
- Micro Precision: 0.7333333333333333
- Weighted Precision: 0.6555555555555554
- Macro Recall: 0.7333333333333333
- Micro Recall: 0.7333333333333333
- Weighted Recall: 0.7333333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucianpopa/autotrain-qn-classification-1015534072
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autotrain-qn-classification-1015534072", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autotrain-qn-classification-1015534072", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
S2312dal/M1_MLM_cross
|
S2312dal
| 2022-06-21T21:31:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T19:52:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M1_MLM_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M1_MLM_cross
This model is a fine-tuned version of [S2312dal/M1_MLM](https://huggingface.co/S2312dal/M1_MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0106
- Pearson: 0.9723
- Spearmanr: 0.9112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8.0
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0094 | 1.0 | 131 | 0.0342 | 0.9209 | 0.8739 |
| 0.0091 | 2.0 | 262 | 0.0157 | 0.9585 | 0.9040 |
| 0.0018 | 3.0 | 393 | 0.0106 | 0.9723 | 0.9112 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
UrukHan/wav2vec2-ru
|
UrukHan
| 2022-06-21T21:19:43Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-21T07:11:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-ru
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5468
- Wer: 0.4124
## Model description
More information needed
## Intended uses & limitations
More information needed
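In the absence of documented usage, a minimal transcription sketch with the ASR pipeline (the audio path is a placeholder; 16 kHz mono audio is the usual expectation for wav2vec2 checkpoints):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="UrukHan/wav2vec2-ru")
print(asr("example_russian_speech.wav"))  # placeholder path -> {'text': '...'}
```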
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.511 | 0.21 | 1000 | 0.5444 | 0.4183 |
| 0.5021 | 0.43 | 2000 | 0.5727 | 0.4112 |
| 0.4746 | 0.64 | 3000 | 0.5495 | 0.4116 |
| 0.5052 | 0.85 | 4000 | 0.5468 | 0.4124 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
lafias/dataset-references
|
lafias
| 2022-06-21T20:54:25Z | 0 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"license:cc",
"model-index",
"region:us"
] |
token-classification
| 2022-05-31T17:09:29Z |
---
inference: false
language:
- en
license: cc # license from https://hf.co/docs/hub/repositories-licenses
library_name: spacy # library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
tags:
- spacy
- token-classification
model-index:
- name: dataset-references
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.85
- name: NER Recall
type: recall
value: 0.88
- name: NER F Score
type: f_score
value: 0.87
---
| Feature | Description |
| --- | --- |
| **Name** | `dataset-references` |
| **Version** | n/a |
| **spaCy** | `3.1.1` |
| **Components** | `transformer`, `ner` |
| **License** | `CC` |
| **Author** | [Sara Lafia](https://saralafia.com) |
|
deepesh0x/autotrain-mlsec-1013333726
|
deepesh0x
| 2022-06-21T20:49:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"julien",
"text-classification",
"autotrain",
"en",
"dataset:deepesh0x/autotrain-data-mlsec",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-21T16:55:28Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-mlsec
co2_eq_emissions: 33.183779535405364
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1013333726
- CO2 Emissions (in grams): 33.183779535405364
## Validation Metrics
- Loss: 0.1998898833990097
- Accuracy: 0.9226923076923077
- Precision: 0.9269808389435525
- Recall: 0.9177134068187645
- AUC: 0.9785380985232148
- F1: 0.9223238438747907
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-mlsec-1013333726
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-mlsec-1013333726", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-mlsec-1013333726", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
QuentinKemperino/ECHR_test_2
|
QuentinKemperino
| 2022-06-21T20:44:10Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:lex_glue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T14:24:02Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: ECHR_test_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ECHR_test_2 Task A
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1998
- Macro-f1: 0.5295
- Micro-f1: 0.6157
## Model description
More information needed
## Intended uses & limitations
More information needed
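In the absence of documented usage, a rough multi-label inference sketch (the macro/micro-F1 metrics above suggest ECHR Task A is treated as multi-label, so scores are read off a sigmoid rather than a softmax; the 0.5 threshold and the input snippet are assumptions):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "QuentinKemperino/ECHR_test_2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
facts = "The applicant alleged that his detention was unlawful."  # illustrative case facts
inputs = tokenizer(facts, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```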
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2142 | 0.44 | 500 | 0.2887 | 0.2391 | 0.4263 |
| 0.172 | 0.89 | 1000 | 0.2672 | 0.2908 | 0.4628 |
| 0.1737 | 1.33 | 1500 | 0.2612 | 0.3657 | 0.5102 |
| 0.1581 | 1.78 | 2000 | 0.2412 | 0.3958 | 0.5468 |
| 0.1509 | 2.22 | 2500 | 0.2264 | 0.3950 | 0.5552 |
| 0.1606 | 2.67 | 3000 | 0.2342 | 0.4006 | 0.5511 |
| 0.1491 | 3.11 | 3500 | 0.2176 | 0.4558 | 0.5622 |
| 0.1392 | 3.56 | 4000 | 0.2454 | 0.4128 | 0.5596 |
| 0.15 | 4.0 | 4500 | 0.2113 | 0.4684 | 0.5874 |
| 0.1461 | 4.44 | 5000 | 0.2179 | 0.4631 | 0.5815 |
| 0.1457 | 4.89 | 5500 | 0.2151 | 0.4805 | 0.5949 |
| 0.1443 | 5.33 | 6000 | 0.2155 | 0.5123 | 0.5917 |
| 0.1279 | 5.78 | 6500 | 0.2131 | 0.4915 | 0.5998 |
| 0.1377 | 6.22 | 7000 | 0.2244 | 0.4705 | 0.5944 |
| 0.1242 | 6.67 | 7500 | 0.2150 | 0.5089 | 0.5918 |
| 0.1222 | 7.11 | 8000 | 0.2045 | 0.4801 | 0.5981 |
| 0.1372 | 7.56 | 8500 | 0.2074 | 0.5317 | 0.5962 |
| 0.1289 | 8.0 | 9000 | 0.2035 | 0.5323 | 0.6126 |
| 0.1295 | 8.44 | 9500 | 0.2058 | 0.5213 | 0.6073 |
| 0.123 | 8.89 | 10000 | 0.2027 | 0.5486 | 0.6135 |
| 0.1335 | 9.33 | 10500 | 0.1984 | 0.5442 | 0.6249 |
| 0.1258 | 9.78 | 11000 | 0.1998 | 0.5295 | 0.6157 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ArthurZ/opt-125m
|
ArthurZ
| 2022-06-21T20:29:12Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-17T13:13:13Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: opt-125m
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# opt-125m
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- TensorFlow 2.9.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|