modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
gauthamk28/a2c-AntBulletEnv-v0
|
gauthamk28
| 2023-03-20T09:11:53Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T09:10:51Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1351.37 +/- 564.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3
# (filename assumed to follow the usual <algo>-<env>.zip convention)
checkpoint = load_from_hub(repo_id="gauthamk28/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
uladkaz/q-Taxi-v3
|
uladkaz
| 2023-03-20T09:08:46Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T08:59:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.66
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="uladkaz/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
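Continuing from the snippet above, a greedy rollout can be sketched as follows. This is only a sketch: it assumes the loaded dictionary exposes a `qtable` entry next to `env_id`, as in the course notebook, and uses the classic gym API (adjust the `reset`/`step` unpacking for gymnasium).
```python
import numpy as np

# Roll out one episode with the greedy policy derived from the Q-table
state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the best action for this state
    state, reward, done, info = env.step(action)
    total_reward += reward
print("Episode return:", total_reward)
```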
|
JoBuettner/q-FrozenLake-v1-4x4-noSlippery
|
JoBuettner
| 2023-03-20T09:07:30Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T09:07:28Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="JoBuettner/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
awsgcptest/test_model
|
awsgcptest
| 2023-03-20T09:01:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-20T08:48:38Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_model
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0423
- Accuracy: 0.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0863 | 1.0 | 1200 | 0.0551 | 0.9875 |
| 0.0306 | 2.0 | 2400 | 0.0423 | 0.9906 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
oliverguhr/fullstop-punctuation-multilingual-sonar-base
|
oliverguhr
| 2023-03-20T08:59:42Z | 1,297 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"punctuation prediction",
"punctuation",
"en",
"de",
"fr",
"it",
"nl",
"multilingual",
"dataset:wmt/europarl",
"dataset:SoNaR",
"arxiv:2301.03319",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-17T08:01:56Z |
---
language:
- en
- de
- fr
- it
- nl
- multilingual
tags:
- punctuation prediction
- punctuation
datasets:
- wmt/europarl
- SoNaR
license: mit
widget:
- text: "Ho sentito che ti sei laureata il che mi fa molto piacere"
example_title: "Italian"
- text: "Tous les matins vers quatre heures mon père ouvrait la porte de ma chambre"
example_title: "French"
- text: "Ist das eine Frage Frau Müller"
example_title: "German"
- text: "My name is Clara and I live in Berkeley California"
example_title: "English"
- text: "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat"
example_title: "Dutch"
metrics:
- f1
---
This model predicts the punctuation of English, Italian, French, German and Dutch texts. We developed it to restore the punctuation of transcribed spoken language.
This multilanguage model was trained on the [Europarl Dataset](https://huggingface.co/datasets/wmt/europarl) provided by the [SEPP-NLG Shared Task](https://sites.google.com/view/sentence-segmentation) and for the Dutch language we included the [SoNaR Dataset](http://hdl.handle.net/10032/tm-a2-h5). *Please note that this dataset consists of political speeches. Therefore the model might perform differently on texts from other domains.*
The model restores the following punctuation markers: **"." "," "?" "-" ":"**
## Sample Code
We provide a simple python package that allows you to process text of any length.
## Install
To get started install the package from [pypi](https://pypi.org/project/deepmultilingualpunctuation/):
```bash
pip install deepmultilingualpunctuation
```
### Restore Punctuation
```python
from deepmultilingualpunctuation import PunctuationModel
model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base")
text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
result = model.restore_punctuation(text)
print(result)
```
**output**
> My name is Clara and I live in Berkeley, California. Ist das eine Frage, Frau Müller?
### Predict Labels
```python
from deepmultilingualpunctuation import PunctuationModel
model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base")
text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
clean_text = model.preprocess(text)
labled_words = model.predict(clean_text)
print(labled_words)
```
**output**
> [['My', '0', 0.99998856], ['name', '0', 0.9999708], ['is', '0', 0.99975926], ['Clara', '0', 0.6117834], ['and', '0', 0.9999014], ['I', '0', 0.9999808], ['live', '0', 0.9999666], ['in', '0', 0.99990165], ['Berkeley', ',', 0.9941764], ['California', '.', 0.9952892], ['Ist', '0', 0.9999577], ['das', '0', 0.9999678], ['eine', '0', 0.99998224], ['Frage', ',', 0.9952265], ['Frau', '0', 0.99995995], ['Müller', '?', 0.972517]]
## Results
Performance differs across the individual punctuation markers because hyphens and colons are, in many cases, optional and can be substituted by either a comma or a full stop. The model achieves the following F1 scores for the different languages:
| Label | English | German | French|Italian| Dutch |
| ------------- | -------- | ------ | ----- | ----- | ----- |
| 0 | 0.990 | 0.996 | 0.991 | 0.988 | 0.994 |
| . | 0.924 | 0.951 | 0.921 | 0.917 | 0.959 |
| ? | 0.825 | 0.829 | 0.800 | 0.736 | 0.817 |
| , | 0.798 | 0.937 | 0.811 | 0.778 | 0.813 |
| : | 0.535 | 0.608 | 0.578 | 0.544 | 0.657 |
| - | 0.345 | 0.384 | 0.353 | 0.344 | 0.464 |
| macro average | 0.736 | 0.784 | 0.742 | 0.718 | 0.784 |
| micro average | 0.975 | 0.987 | 0.977 | 0.972 | 0.983 |
## Languages
### Models
| Languages | Model |
| ------------------------------------------ | ------------------------------------------------------------ |
| English, Italian, French and German | [oliverguhr/fullstop-punctuation-multilang-large](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large) |
| English, Italian, French, German and Dutch | [oliverguhr/fullstop-punctuation-multilingual-sonar-base](https://huggingface.co/oliverguhr/fullstop-punctuation-multilingual-sonar-base) |
| Dutch | [oliverguhr/fullstop-dutch-sonar-punctuation-prediction](https://huggingface.co/oliverguhr/fullstop-dutch-sonar-punctuation-prediction) |
### Community Models
| Languages | Model |
| ------------------------------------------ | ------------------------------------------------------------ |
|English, German, French, Spanish, Bulgarian, Italian, Polish, Dutch, Czech, Portuguese, Slovak, Slovenian| [kredor/punctuate-all](https://huggingface.co/kredor/punctuate-all) |
| Catalan | [softcatala/fullstop-catalan-punctuation-prediction](https://huggingface.co/softcatala/fullstop-catalan-punctuation-prediction) |
You can use different models by setting the model parameter:
```python
model = PunctuationModel(model = "oliverguhr/fullstop-dutch-punctuation-prediction")
```
## How to cite us
```
@article{guhr-EtAl:2021:fullstop,
title={FullStop: Multilingual Deep Models for Punctuation Prediction},
author = {Guhr, Oliver and Schumann, Anne-Kathrin and Bahrmann, Frank and Böhme, Hans Joachim},
booktitle = {Proceedings of the Swiss Text Analytics Conference 2021},
month = {June},
year = {2021},
address = {Winterthur, Switzerland},
publisher = {CEUR Workshop Proceedings},
url = {http://ceur-ws.org/Vol-2957/sepp_paper4.pdf}
}
```
```
@misc{https://doi.org/10.48550/arxiv.2301.03319,
doi = {10.48550/ARXIV.2301.03319},
url = {https://arxiv.org/abs/2301.03319},
author = {Vandeghinste, Vincent and Guhr, Oliver},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7},
title = {FullStop:Punctuation and Segmentation Prediction for Dutch with Transformers},
publisher = {arXiv},
year = {2023},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
```
|
McCheng/ppo-Pyramids
|
McCheng
| 2023-03-20T08:58:11Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-20T08:58:03Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: McCheng/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
uladkaz/q-FrozenLake-v1-4x4-noSlippery
|
uladkaz
| 2023-03-20T08:56:13Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T08:56:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="uladkaz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MoritzLaurer/DeBERTa-v3-base-mnli
|
MoritzLaurer
| 2023-03-20T08:33:23Z | 2,585 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"arxiv:2006.03654",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:04Z |
---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
# DeBERTa-v3-base-mnli
## Model description
This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.
The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf). For a more powerful model, check out [DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) which was trained on even more data.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
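Since the card is tagged for zero-shot classification, the same checkpoint can also be used through the `zero-shot-classification` pipeline; a minimal sketch (the sequence and candidate labels are only illustrative):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli")
sequence = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
print(classifier(sequence, candidate_labels, multi_label=False))
```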
### Training data
This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.
### Training procedure
DeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the matched test set and achieves 0.90 accuracy.
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=0.97&mnli_lp=nan&20_newsgroup=-0.39&ag_news=0.19&amazon_reviews_multi=0.10&anli=1.31&boolq=0.81&cb=8.93&cola=0.01&copa=13.60&dbpedia=-0.23&esnli=-0.51&financial_phrasebank=0.61&imdb=-0.26&isear=-0.35&mnli=-0.34&mrpc=1.24&multirc=1.50&poem_sentiment=-0.19&qnli=0.30&qqp=0.13&rotten_tomatoes=-0.55&rte=3.57&sst2=0.35&sst_5bins=0.39&stsb=1.10&trec_coarse=-0.36&trec_fine=-0.02&tweet_ev_emoji=1.11&tweet_ev_emotion=-0.35&tweet_ev_hate=1.43&tweet_ev_irony=-2.65&tweet_ev_offensive=-1.69&tweet_ev_sentiment=-1.51&wic=0.57&wnli=-2.61&wsc=9.95&yahoo_answers=-0.33&model_name=MoritzLaurer%2FDeBERTa-v3-base-mnli&base_name=microsoft%2Fdeberta-v3-base) using MoritzLaurer/DeBERTa-v3-base-mnli as a base model yields average score of 80.01 in comparison to 79.04 by microsoft/deberta-v3-base.
The model is ranked 1st among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.
Results:
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|-------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 86.0196 | 90.6333 | 66.96 | 60.0938 | 83.792 | 83.9286 | 86.5772 | 72 | 79.2 | 91.419 | 85.1 | 94.232 | 71.5124 | 89.4426 | 90.4412 | 63.7583 | 86.5385 | 93.8129 | 91.9144 | 89.8687 | 85.9206 | 95.4128 | 57.3756 | 91.377 | 97.4 | 91 | 47.302 | 83.6031 | 57.6431 | 77.1684 | 83.3721 | 70.2947 | 71.7868 | 67.6056 | 74.0385 | 71.7 |
For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
|
MoritzLaurer/ernie-m-base-mnli-xnli
|
MoritzLaurer
| 2023-03-20T08:28:54Z | 3,053 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"ernie_m",
"text-classification",
"zero-shot-classification",
"nli",
"multilingual",
"en",
"ar",
"bg",
"de",
"el",
"es",
"fr",
"hi",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2012.15674",
"arxiv:1809.05053",
"arxiv:2111.09543",
"arxiv:1911.02116",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-02-16T14:21:31Z |
---
language:
- multilingual
- en
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: apache-2.0
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
metrics:
- accuracy
datasets:
- multi_nli
- xnli
pipeline_tag: zero-shot-classification
widget:
- text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels: "politics, economy, entertainment, environment"
---
# Multilingual ernie-m-base-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual
zero-shot classification. The underlying model was pre-trained by Baidu, based on Meta's RoBERTa (pre-trained on the
[CC100 multilingual dataset](https://huggingface.co/datasets/cc100)). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli),
which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
The model was introduced by Baidu in [this paper](https://arxiv.org/pdf/2012.15674.pdf). The model outperforms RoBERTa models of equal size.
If you are looking for a faster (but less performant) model, you can
try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
Among models of equal size, [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli)
performs better on the XNLI benchmark. For better performance,
you can try the slower [ernie-m-large-mnli-xnli](https://huggingface.co/MoritzLaurer/ernie-m-large-mnli-xnli).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/ernie-m-base-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/ernie-m-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2490 professionally translated texts from English
to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that the XNLI contains a training set of 15 machine translated versions of the MNLI dataset for 15 languages,
but due to quality issues with these machine translations, this model was only trained
on the professional translations from the XNLI development set and the original English
MNLI training set (392 702 texts). Not using machine-translated texts avoids overfitting the
model to the 15 languages, avoids catastrophic forgetting of the other 85 languages ernie-m
was pre-trained on, and significantly reduces training costs.
### Training procedure
ernie-m-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=3e-05,
per_device_train_batch_size=16, # batch size per device during training
gradient_accumulation_steps=2,
per_device_eval_batch_size=16, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
fp16=True,
)
```
### Eval results
The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total).
Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training
data in the specific language (cross-lingual transfer). This means that the model is also able to
do NLI on the other 85 languages ernie-m was pre-trained on, but performance is most likely lower
than for those languages available in XNLI.
Also note that if other multilingual models on the model hub claim performance of around 90% on languages
other than English, the authors have most likely made a mistake during testing, since none of the latest papers
show a multilingual average performance of more than a few points above 80% on XNLI
(see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
|Datasets|avg_xnli|mnli_m|mnli_mm|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.78|0.849|0.85|0.777|0.812|0.804|0.797|0.854|0.814|0.803|0.744|0.784|0.711|0.765|0.776|0.717|0.793|0.749|
|Inference text/sec (A100, batch=120)|3310.0|1967.0|1944.0|3443.0|3277.0|3338.0|2884.0|3696.0|3439.0|3071.0|3094.0|3222.0|3445.0|3490.0|3690.0|3175.0|3295.0|3096.0|
## Limitations and bias
Please consult the original ernie-m paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz,
Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine
Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl
or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
## Debugging and issues
The ernie-m architecture is only supported with transformers==4.27 or higher
(which is not yet released and causes an error in the inference widget as of 03.03.23).
In order to run the model before the release of 4.27, you need to install transformers from source with: `pip install git+https://github.com/huggingface/transformers`
as well as the sentencepiece tokenizer with: `pip install sentencepiece`
After the release, you can run: `pip install "transformers[sentencepiece]>=4.27"`
|
morenolq/thext-ai-scibert
|
morenolq
| 2023-03-20T08:20:42Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"regression",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-13T07:42:46Z |
---
language: "en"
tags:
- bert
- regression
- pytorch
pipeline:
- text-classification
widget:
- text: "We propose a new approach, based on Transformer-based encoding, to highlight extraction. To the best of our knowledge, this is the first attempt to use transformer architectures to address automatic highlight generation. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "We design a context-aware sentence-level regressor, in which the semantic similarity between candidate sentences and highlights is estimated by also attending the contextual knowledge provided by the other paper sections. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "Fig. 2, Fig. 3, Fig. 4 show the effect of varying the number K of selected highlights on the extraction performance. As expected, recall values increase while increasing the number of selected highlights, whereas precision values show an opposite trend. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
---
# General Information
This model is trained on journal publications belonging to the domain: **Artificial Intelligence**.
It is an `allenai/scibert_scivocab_cased` model trained in the scientific domain. The model is trained with a regression objective to estimate the relevance of a sentence according to the provided context (e.g., the abstract of the scientific paper).
The model is used in the paper 'Transformer-based highlights extraction from scientific papers', published in the Knowledge-Based Systems journal.
The model achieves state-of-the-art performance in the task of highlights extraction from scientific papers.
Access the full paper [here](https://doi.org/10.1016/j.knosys.2022.109382).
# Usage:
For detailed usage, please refer to the official repository: https://github.com/MorenoLaQuatra/THExt .
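As a rough illustration only, the checkpoint can also be scored directly with `transformers`. This is a sketch under assumptions: it assumes the model exposes a single regression logit through `AutoModelForSequenceClassification` and expects inputs of the form `"<sentence> [SEP] <context>"`, as in the widget examples above.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "morenolq/thext-ai-scibert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

sentence = "We propose a new approach, based on Transformer-based encoding, to highlight extraction."
context = "Highlights are short sentences used to annotate scientific papers."
inputs = tokenizer(sentence + " [SEP] " + context, truncation=True, return_tensors="pt")

with torch.no_grad():
    relevance = model(**inputs).logits.squeeze().item()  # higher means more highlight-worthy
print(relevance)
```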
# References:
If you find it useful, please cite the following paper:
```bibtex
@article{thext,
title={Transformer-based highlights extraction from scientific papers},
author={La Quatra, Moreno and Cagliero, Luca},
journal={Knowledge-Based Systems},
pages={109382},
year={2022},
publisher={Elsevier}
}
```
|
joheras/clinico-bsc-bio-ehr-es
|
joheras
| 2023-03-20T08:16:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-20T07:37:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: clinico-bsc-bio-ehr-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinico-bsc-bio-ehr-es
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9988
- Precision: 0.4916
- Recall: 0.6526
- F1: 0.5608
- Accuracy: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 1.2185 | 0.0189 | 0.0359 | 0.0247 | 0.6197 |
| No log | 2.0 | 50 | 0.7442 | 0.1562 | 0.1975 | 0.1744 | 0.7996 |
| No log | 3.0 | 75 | 0.6502 | 0.2108 | 0.2640 | 0.2344 | 0.8180 |
| No log | 4.0 | 100 | 0.6404 | 0.3453 | 0.4572 | 0.3935 | 0.8258 |
| No log | 5.0 | 125 | 0.6131 | 0.3639 | 0.4657 | 0.4085 | 0.8303 |
| No log | 6.0 | 150 | 0.6123 | 0.3356 | 0.4256 | 0.3752 | 0.8341 |
| No log | 7.0 | 175 | 0.6093 | 0.3411 | 0.4498 | 0.3880 | 0.8370 |
| No log | 8.0 | 200 | 0.6198 | 0.3840 | 0.4931 | 0.4318 | 0.8379 |
| No log | 9.0 | 225 | 0.6490 | 0.3878 | 0.5037 | 0.4382 | 0.8378 |
| No log | 10.0 | 250 | 0.6653 | 0.3810 | 0.5005 | 0.4327 | 0.8371 |
| No log | 11.0 | 275 | 0.6456 | 0.3223 | 0.4847 | 0.3872 | 0.8387 |
| No log | 12.0 | 300 | 0.6475 | 0.3377 | 0.4847 | 0.3981 | 0.8474 |
| No log | 13.0 | 325 | 0.6620 | 0.4004 | 0.5734 | 0.4716 | 0.8506 |
| No log | 14.0 | 350 | 0.6798 | 0.3914 | 0.5649 | 0.4624 | 0.8533 |
| No log | 15.0 | 375 | 0.6880 | 0.3969 | 0.5671 | 0.4670 | 0.8520 |
| No log | 16.0 | 400 | 0.7012 | 0.4192 | 0.5913 | 0.4906 | 0.8551 |
| No log | 17.0 | 425 | 0.7224 | 0.4143 | 0.5924 | 0.4876 | 0.8517 |
| No log | 18.0 | 450 | 0.7510 | 0.4302 | 0.6051 | 0.5029 | 0.8553 |
| No log | 19.0 | 475 | 0.7388 | 0.4271 | 0.6030 | 0.5 | 0.8532 |
| 0.3652 | 20.0 | 500 | 0.7524 | 0.4374 | 0.6125 | 0.5103 | 0.8569 |
| 0.3652 | 21.0 | 525 | 0.7408 | 0.4427 | 0.6082 | 0.5125 | 0.8580 |
| 0.3652 | 22.0 | 550 | 0.7430 | 0.4448 | 0.6125 | 0.5153 | 0.8610 |
| 0.3652 | 23.0 | 575 | 0.7726 | 0.4193 | 0.6093 | 0.4968 | 0.8582 |
| 0.3652 | 24.0 | 600 | 0.7876 | 0.4316 | 0.6061 | 0.5042 | 0.8562 |
| 0.3652 | 25.0 | 625 | 0.7777 | 0.4620 | 0.6294 | 0.5329 | 0.8595 |
| 0.3652 | 26.0 | 650 | 0.8009 | 0.4521 | 0.6272 | 0.5254 | 0.8570 |
| 0.3652 | 27.0 | 675 | 0.8153 | 0.4583 | 0.6378 | 0.5333 | 0.8572 |
| 0.3652 | 28.0 | 700 | 0.8215 | 0.4611 | 0.6262 | 0.5311 | 0.8580 |
| 0.3652 | 29.0 | 725 | 0.8296 | 0.4699 | 0.6336 | 0.5396 | 0.8595 |
| 0.3652 | 30.0 | 750 | 0.8174 | 0.4597 | 0.6378 | 0.5343 | 0.8603 |
| 0.3652 | 31.0 | 775 | 0.8442 | 0.4765 | 0.6410 | 0.5466 | 0.8599 |
| 0.3652 | 32.0 | 800 | 0.8281 | 0.4646 | 0.6315 | 0.5354 | 0.8610 |
| 0.3652 | 33.0 | 825 | 0.8322 | 0.4583 | 0.6389 | 0.5337 | 0.8591 |
| 0.3652 | 34.0 | 850 | 0.8153 | 0.4559 | 0.6272 | 0.528 | 0.8623 |
| 0.3652 | 35.0 | 875 | 0.8529 | 0.4861 | 0.6294 | 0.5486 | 0.8589 |
| 0.3652 | 36.0 | 900 | 0.8826 | 0.4699 | 0.6272 | 0.5373 | 0.8559 |
| 0.3652 | 37.0 | 925 | 0.8856 | 0.4654 | 0.6325 | 0.5363 | 0.8571 |
| 0.3652 | 38.0 | 950 | 0.8983 | 0.4819 | 0.6315 | 0.5466 | 0.8560 |
| 0.3652 | 39.0 | 975 | 0.8723 | 0.4641 | 0.6272 | 0.5335 | 0.8556 |
| 0.0269 | 40.0 | 1000 | 0.8788 | 0.4662 | 0.6399 | 0.5394 | 0.8550 |
| 0.0269 | 41.0 | 1025 | 0.8952 | 0.4805 | 0.6378 | 0.5481 | 0.8611 |
| 0.0269 | 42.0 | 1050 | 0.8901 | 0.4657 | 0.6304 | 0.5357 | 0.8574 |
| 0.0269 | 43.0 | 1075 | 0.9015 | 0.4746 | 0.6410 | 0.5454 | 0.8574 |
| 0.0269 | 44.0 | 1100 | 0.8838 | 0.4655 | 0.6420 | 0.5397 | 0.8591 |
| 0.0269 | 45.0 | 1125 | 0.9093 | 0.4718 | 0.6441 | 0.5446 | 0.8598 |
| 0.0269 | 46.0 | 1150 | 0.9154 | 0.4826 | 0.6441 | 0.5518 | 0.8553 |
| 0.0269 | 47.0 | 1175 | 0.9214 | 0.4614 | 0.6315 | 0.5332 | 0.8538 |
| 0.0269 | 48.0 | 1200 | 0.9313 | 0.4639 | 0.6315 | 0.5349 | 0.8546 |
| 0.0269 | 49.0 | 1225 | 0.9137 | 0.4807 | 0.6431 | 0.5501 | 0.8582 |
| 0.0269 | 50.0 | 1250 | 0.9235 | 0.4939 | 0.6463 | 0.5599 | 0.8571 |
| 0.0269 | 51.0 | 1275 | 0.9263 | 0.4900 | 0.6441 | 0.5566 | 0.8580 |
| 0.0269 | 52.0 | 1300 | 0.9190 | 0.4787 | 0.6420 | 0.5485 | 0.8613 |
| 0.0269 | 53.0 | 1325 | 0.9159 | 0.4700 | 0.6441 | 0.5434 | 0.8616 |
| 0.0269 | 54.0 | 1350 | 0.9302 | 0.4806 | 0.6399 | 0.5489 | 0.8614 |
| 0.0269 | 55.0 | 1375 | 0.9391 | 0.4877 | 0.6515 | 0.5579 | 0.8581 |
| 0.0269 | 56.0 | 1400 | 0.9392 | 0.4959 | 0.6452 | 0.5608 | 0.8580 |
| 0.0269 | 57.0 | 1425 | 0.9444 | 0.4798 | 0.6410 | 0.5488 | 0.8570 |
| 0.0269 | 58.0 | 1450 | 0.9394 | 0.4777 | 0.6441 | 0.5486 | 0.8596 |
| 0.0269 | 59.0 | 1475 | 0.9562 | 0.4833 | 0.6420 | 0.5515 | 0.8586 |
| 0.0098 | 60.0 | 1500 | 0.9485 | 0.4801 | 0.6484 | 0.5517 | 0.8582 |
| 0.0098 | 61.0 | 1525 | 0.9521 | 0.4679 | 0.6463 | 0.5428 | 0.8582 |
| 0.0098 | 62.0 | 1550 | 0.9603 | 0.4759 | 0.6463 | 0.5481 | 0.8563 |
| 0.0098 | 63.0 | 1575 | 0.9663 | 0.4831 | 0.6473 | 0.5532 | 0.8561 |
| 0.0098 | 64.0 | 1600 | 0.9641 | 0.4780 | 0.6526 | 0.5518 | 0.8580 |
| 0.0098 | 65.0 | 1625 | 0.9607 | 0.4767 | 0.6494 | 0.5498 | 0.8606 |
| 0.0098 | 66.0 | 1650 | 0.9782 | 0.4849 | 0.6463 | 0.5541 | 0.8563 |
| 0.0098 | 67.0 | 1675 | 0.9806 | 0.4916 | 0.6484 | 0.5592 | 0.8562 |
| 0.0098 | 68.0 | 1700 | 0.9728 | 0.4889 | 0.6494 | 0.5578 | 0.8578 |
| 0.0098 | 69.0 | 1725 | 0.9766 | 0.4885 | 0.6494 | 0.5576 | 0.8584 |
| 0.0098 | 70.0 | 1750 | 0.9738 | 0.4862 | 0.6526 | 0.5573 | 0.8575 |
| 0.0098 | 71.0 | 1775 | 0.9788 | 0.4916 | 0.6505 | 0.56 | 0.8571 |
| 0.0098 | 72.0 | 1800 | 0.9845 | 0.4845 | 0.6452 | 0.5534 | 0.8563 |
| 0.0098 | 73.0 | 1825 | 0.9729 | 0.4876 | 0.6463 | 0.5559 | 0.8573 |
| 0.0098 | 74.0 | 1850 | 0.9854 | 0.4846 | 0.6494 | 0.5551 | 0.8569 |
| 0.0098 | 75.0 | 1875 | 0.9903 | 0.4885 | 0.6505 | 0.5580 | 0.8562 |
| 0.0098 | 76.0 | 1900 | 0.9825 | 0.4886 | 0.6558 | 0.5600 | 0.8568 |
| 0.0098 | 77.0 | 1925 | 0.9994 | 0.4876 | 0.6463 | 0.5559 | 0.8554 |
| 0.0098 | 78.0 | 1950 | 0.9922 | 0.4905 | 0.6515 | 0.5596 | 0.8546 |
| 0.0098 | 79.0 | 1975 | 1.0084 | 0.4928 | 0.6484 | 0.5600 | 0.8578 |
| 0.0057 | 80.0 | 2000 | 0.9931 | 0.4976 | 0.6526 | 0.5646 | 0.8580 |
| 0.0057 | 81.0 | 2025 | 0.9864 | 0.4826 | 0.6452 | 0.5522 | 0.8595 |
| 0.0057 | 82.0 | 2050 | 0.9929 | 0.4900 | 0.6484 | 0.5582 | 0.8595 |
| 0.0057 | 83.0 | 2075 | 0.9902 | 0.4916 | 0.6473 | 0.5588 | 0.8588 |
| 0.0057 | 84.0 | 2100 | 1.0021 | 0.4872 | 0.6431 | 0.5544 | 0.8573 |
| 0.0057 | 85.0 | 2125 | 1.0013 | 0.4964 | 0.6473 | 0.5619 | 0.8582 |
| 0.0057 | 86.0 | 2150 | 0.9814 | 0.4865 | 0.6484 | 0.5559 | 0.8625 |
| 0.0057 | 87.0 | 2175 | 0.9841 | 0.4932 | 0.6558 | 0.5630 | 0.8622 |
| 0.0057 | 88.0 | 2200 | 0.9888 | 0.4866 | 0.6515 | 0.5571 | 0.8610 |
| 0.0057 | 89.0 | 2225 | 0.9898 | 0.4924 | 0.6515 | 0.5609 | 0.8610 |
| 0.0057 | 90.0 | 2250 | 0.9860 | 0.4870 | 0.6526 | 0.5578 | 0.8607 |
| 0.0057 | 91.0 | 2275 | 0.9925 | 0.4912 | 0.6484 | 0.5589 | 0.8589 |
| 0.0057 | 92.0 | 2300 | 0.9904 | 0.4956 | 0.6536 | 0.5638 | 0.8599 |
| 0.0057 | 93.0 | 2325 | 0.9902 | 0.4980 | 0.6526 | 0.5649 | 0.8602 |
| 0.0057 | 94.0 | 2350 | 0.9925 | 0.5041 | 0.6547 | 0.5696 | 0.8602 |
| 0.0057 | 95.0 | 2375 | 0.9959 | 0.4897 | 0.6515 | 0.5591 | 0.8589 |
| 0.0057 | 96.0 | 2400 | 0.9951 | 0.4901 | 0.6505 | 0.5590 | 0.8591 |
| 0.0057 | 97.0 | 2425 | 0.9962 | 0.4924 | 0.6505 | 0.5605 | 0.8588 |
| 0.0057 | 98.0 | 2450 | 0.9972 | 0.5008 | 0.6505 | 0.5659 | 0.8585 |
| 0.0057 | 99.0 | 2475 | 0.9988 | 0.4920 | 0.6526 | 0.5611 | 0.8588 |
| 0.0045 | 100.0 | 2500 | 0.9988 | 0.4916 | 0.6526 | 0.5608 | 0.8586 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
|
seongwoon/labor_space_bert
|
seongwoon
| 2023-03-20T08:06:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-20T07:16:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: labor_space_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# labor_space_bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
|
ku-nlp/roberta-base-japanese-char-wwm
|
ku-nlp
| 2023-03-20T08:05:45Z | 674 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-20T05:07:34Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "京都大学で自然言語処理を[MASK]する。"
---
# ku-nlp/roberta-base-japanese-char-wwm
## Model description
This is a Japanese RoBERTa base model pre-trained on Japanese Wikipedia and the Japanese portion of CC-100.
This model is trained with character-level tokenization and whole word masking.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/roberta-base-japanese-char-wwm')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/roberta-base-japanese-char-wwm')
sentence = '京都大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
# Predict the most likely character for the [MASK] position
with torch.no_grad():
    logits = model(**encoding).logits
mask_index = (encoding['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```
You can fine-tune this model on downstream tasks.
## Tokenization
There is no need to tokenize texts in advance, and you can give raw texts to the tokenizer.
The texts are tokenized into character-level tokens by [sentencepiece](https://github.com/google/sentencepiece).
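For example, the following snippet (illustrative text) shows raw text going in and character-level tokens coming out:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/roberta-base-japanese-char-wwm')
# Raw text in; the tokenizer returns one token per character
print(tokenizer.tokenize('京都大学で自然言語処理を研究する。'))
```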
## Vocabulary
The vocabulary consists of 18,377 tokens including all characters that appear in the training corpus.
## Training procedure
This model was trained on Japanese Wikipedia (as of 20220220) and the Japanese portion of CC-100. It took two weeks using 8 NVIDIA A100 GPUs.
The following hyperparameters were used during pre-training:
- learning_rate: 1e-4
- per_device_train_batch_size: 62
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 3968
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear schedule with warmup
- training_steps: 330000
- warmup_steps: 10000
## Acknowledgments
This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models".
For training models, we used the mdx: a platform for the data-driven future.
|
akhooli/poetry2023
|
akhooli
| 2023-03-20T08:05:19Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ar",
"dataset:APCD",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-04T06:06:52Z |
---
language: "ar"
tags:
- text-generation
datasets:
- APCD
widget:
- text: "."
- text: "عيد بأية حال"
- text: "يا قدس"
- text: "يا قدس"
- text: "ألا ليت"
---
# GPT2-Arabic-Poetry-2023
## Model description
A model fine-tuned on an Arabic poetry dataset, based on aragpt2-medium.
## Intended uses & limitations
#### How to use
An example is provided in this [colab notebook](todo).
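In the meantime, a minimal text-generation sketch (the parameters are illustrative, mirroring the author's `akhooli/ap2023` card further down):
```python
from transformers import pipeline

pipe = pipeline('text-generation', framework='pt', device=-1,
                model='akhooli/poetry2023', tokenizer='akhooli/poetry2023')
print(pipe("عيد بأية حال", max_length=96, do_sample=True, top_p=0.95)[0]["generated_text"])
```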
#### Limitations and bias
Both the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance.
Use them as demonstrations or proofs of concept, but not as production code.
## Training data
This pretrained model used the [dataset](todo) from several eras with a total of around 1.4m lines.
The dataset was trained (fine-tuned) based on the [aragpt2-medium](https://huggingface.co/aubmindlab/aragpt2-medium) transformer model.
## Training procedure
Training was done using the [Simple Transformers](https://github.com/ThilinaRajapakse/simpletransformers) library on Colab, with a free GPU.
## Eval results
Final perplexity reached was 49.56, train loss: 3.336
### BibTeX entry and citation info
```bibtex
@inproceedings{Abed Khooli,
year={2023}
}
```
|
akhooli/ap2023
|
akhooli
| 2023-03-20T08:04:39Z | 74 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ar",
"dataset:APCD",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-06T11:42:33Z |
---
language: "ar"
tags:
- text-generation
datasets:
- APCD
widget:
- text: "."
- text: "عيد بأية حال"
- text: "يا قدس"
- text: "يا قدس"
- text: "ألا ليت"
---
# GPT2-Arabic-Poetry-2023
## Model description
A model fine-tuned on an Arabic poetry dataset, based on aragpt2-medium.
## Intended uses & limitations
#### How to use
Try this [HF Space](https://huggingface.co/spaces/akhooli/poetry).
From script:
```
from transformers import pipeline

pipe = pipeline('text-generation', framework='pt', device=-1, model='akhooli/ap2023', tokenizer='akhooli/ap2023')
prompt = "عيد بأية حال"  # example prompt (taken from the widget examples above)
gen = pipe(prompt, max_length=96, temperature=0.95, repetition_penalty=1.05,
           num_beams=3, num_return_sequences=2, do_sample=True,
           top_p=1.0, top_k=50, return_full_text=True)[0]["generated_text"]
poetry = ""
for line in gen.split('.')[:-1]:
    poetry += line
print(poetry)
```
#### Limitations and bias
Both the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance.
Use them as demonstrations or proofs of concept, but not as production code.
## Training data
This pretrained model used poems from several eras with a total of around 1.4M lines (1.25M used for training).
The dataset was trained (fine-tuned) based on the [aragpt2-medium](https://huggingface.co/aubmindlab/aragpt2-medium) transformer model.
## Training procedure
Training was done using the HF Trainer with a free GPU on Kaggle.
## Eval results
Final perplexity reached was 52, eval_accuracy = 0.3704, eval_loss = 3.9513
### BibTeX entry and citation info
```bibtex
@inproceedings{Abed Khooli,
year={2023}
}
```
|
KBLab/megatron-bert-large-swedish-cased-165k
|
KBLab
| 2023-03-20T08:01:57Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-21T09:38:41Z |
---
language:
- sv
---
# Megatron-BERT-large Swedish 165k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-large with 340M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 165k training steps using a batch size of 8k; the total number of training steps is set to 500k, meaning that this version is an intermediate checkpoint.
The hyperparameters for training followed the setting for RoBERTa.
The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)
and an earlier checkpoint
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si).
|
loveisp/taxi-v3
|
loveisp
| 2023-03-20T07:33:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T07:33:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="loveisp/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
alvarez/rl_course_vizdoom_health_gathering_supreme
|
alvarez
| 2023-03-20T07:25:43Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T07:25:17Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 4.16 +/- 0.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r alvarez/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ldaquan1996/Reinforce-v1
|
ldaquan1996
| 2023-03-20T07:21:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T07:21:15Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
0xhaz/bert2bert-cnn_dailymail-fp16-finetuned-1.0.0
|
0xhaz
| 2023-03-20T06:58:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-26T10:53:48Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert2bert-cnn_dailymail-fp16-finetuned-1.0.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert2bert-cnn_dailymail-fp16-finetuned-1.0.0
This model is a fine-tuned version of [patrickvonplaten/bert2bert-cnn_dailymail-fp16](https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3346
- Rouge1: 46.3609
- Rouge2: 18.8105
- Rougel: 30.215
- Rougelsum: 42.3642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|
| 2.8263 | 1.0 | 586 | 2.4478 | 45.3367 | 18.3604 | 29.713 | 41.2805 |
| 2.1264 | 2.0 | 1172 | 2.3346 | 46.3609 | 18.8105 | 30.215 | 42.3642 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.0
- Tokenizers 0.13.2
|
mr4/phobert-base-vi-sentiment-analysis
|
mr4
| 2023-03-20T06:55:54Z | 191,047 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"Vietnamese",
"sentiment",
"analysis",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-17T07:10:43Z |
---
language:
- vi
library_name: transformers
pipeline_tag: text-classification
tags:
- Vietnamese
- sentiment
- analysis
---
# Sentiment Analysis in Vietnamese
## PhoBERT for sentiment analysis
## Model description
The model determines the sentiment of a passage of text.
It uses the labels: "Tích cực" (positive), "Tiêu cực" (negative), "Trung tính" (neutral).
Examples:
Thời tiết hôm nay không được đẹp, trời mưa và lạnh. (The weather today is not nice; it is rainy and cold.)
```text
Tiêu cực: 0.9596341252326965
Tích cực: 0.010115462355315685
Trung tính: 0.030250443145632744
```
Hôm nay đi làm thật vui, ăn uống thật ngon. (Work was really fun today, and the food was delicious.)
```text
Tiêu cực: 0.002220266032963991
Tích cực: 0.9917450547218323
Trung tính: 0.006034655496478081
```
Bình thường. Không có gì đặc biệt. (Ordinary. Nothing special.)
```text
Tiêu cực: 0.03198615834116936
Tích cực: 0.05307402461767197
Trung tính: 0.9149397611618042
```
## Base model
The model is trained on top of VinAI's PhoBERT base model (https://huggingface.co/vinai/phobert-large).
## Training data
The model is trained on data collected by linhlpv (https://www.kaggle.com/datasets/linhlpv/vietnamese-sentiment-analyst), with some modifications.
It contains 31,436 product review entries.
## Model variations
Not specified.
## Intended uses & limitations
Not specified.
## License
This is an open-source library; you can use it for any purpose.
Attribution when using this model is appreciated (but not required).
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import os
def clear():
os.system('clear')
checkpoint = "mr4/phobert-base-vi-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
clear()
print("Ngày hôm nay của bạn thế nào?")
val = input("")
raw_inputs = [val]
inputs = tokenizer(raw_inputs, padding=True,
truncation=True, return_tensors="pt")
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
clear()
print(">>>>>>>>>>>>>>>>>>>>>>>>>>")
for i, prediction in enumerate(predictions):
print(raw_inputs[i])
for j, value in enumerate(prediction):
print(
" " + model.config.id2label[j] + ": " + str(value.item()))
print("<<<<<<<<<<<<<<<<<<<<<<<<<<")
```
## Contact
For any related questions, please reach out via email: zZz4everzZz@live.co.uk.
|
mr4/bert-base-jp-sentiment-analysis
|
mr4
| 2023-03-20T06:54:56Z | 30 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentiment",
"analysis",
"Japanses",
"ja",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-20T04:45:19Z |
---
language:
- ja
library_name: transformers
pipeline_tag: text-classification
tags:
- sentiment
- analysis
- Japanses
---
# Sentiment Analysis in Japanese - Phân tích cảm xúc trong tiếng Nhật
## BERT for sentiment analysis
## Model description
The model determines the sentiment of a piece of Japanese text.
It uses the labels "positive" and "negative".
Examples:
今日はいい天気ですね (The weather is nice today, isn't it?)
```text
negative: 6.001393558108248e-05
positive: 0.999940037727356
```
今日の食べ物はとてもつまらない (Today's food is very dull.)
```text
negative: 0.9999252557754517
positive: 7.470489799743518e-05
```
## Base model
The model is trained on top of a Japanese BERT base model.
## Training data
The model is trained on data collected by TAKAHIRO KUBO (https://www.kaggle.com/datasets/takahirokubo0/chabsa), with some modifications.
## Model variations
Not yet determined.
## Intended uses & limitations
Not yet determined.
## License
This is an open-source library; you may use it for any purpose.
Credit when using this model is appreciated (though not required).
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import os
def clear():
os.system('clear')
checkpoint = "mr4/bert-base-jp-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
clear()
print("Ngày hôm nay của bạn thế nào?")
val = input("")
raw_inputs = [val]
inputs = tokenizer(raw_inputs, padding=True,
truncation=True, return_tensors="pt")
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
clear()
print(">>>>>>>>>>>>>>>>>>>>>>>>>>")
for i, prediction in enumerate(predictions):
print(raw_inputs[i])
for j, value in enumerate(prediction):
print(
" " + model.config.id2label[j] + ": " + str(value.item()))
print("<<<<<<<<<<<<<<<<<<<<<<<<<<")
```
## Contact
For any related questions, please reach out via email: zZz4everzZz@live.co.uk.
|
Raiden-1001/a2c-AntBulletEnv-v0
|
Raiden-1001
| 2023-03-20T06:28:56Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T06:27:45Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 965.31 +/- 242.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub(repo_id="Raiden-1001/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
msp3887/q-Taxi-v3
|
msp3887
| 2023-03-20T06:18:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T06:18:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="msp3887/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dfurman/BEiT-base-land-cover-v0.1
|
dfurman
| 2023-03-20T06:17:37Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-18T18:07:07Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-ches-demo-v0
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9870689655172413
widget:
- src: https://imgs.mongabay.com/wp-content/uploads/sites/20/2020/04/07204605/amazon_coca_01.jpg
example_title: Tree Canopy
- src: https://images.ctfassets.net/nzn0tepgtyr1/4tyavnFHhmNuVky1ISq51k/64aaf596f6b8ee12d0f0e898679c8f4f/Hero_Image.jpg?w=1024&h=710&fl=progressive&q=50&fm=jpg&bg=transparent
example_title: Low Vegetation
- src: https://outline-prod.imgix.net/20170228-YxGtsv8J0ePP0rXcnle2?auto=format&q=60&w=1280&s=27916f48ed9226c2a2b7848de8d7c0d1
example_title: Impervious Surfaces
- src: https://clarity.maptiles.arcgis.com/arcgis/rest/services/World_Imagery/MapServer/tile/15/11883/10109
example_title: Water
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-ches-demo-v0
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0420
- Accuracy: 0.9871
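A minimal inference sketch using the image-classification pipeline (the image path is a placeholder; image URLs also work):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dfurman/BEiT-base-land-cover-v0.1")
preds = classifier("satellite_tile.jpg")  # placeholder path
for p in preds:
    print(p["label"], round(p["score"], 3))
```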
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0183 | 3.45 | 300 | 0.0420 | 0.9871 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Orreo/Iridescent_painter
|
Orreo
| 2023-03-20T06:05:11Z | 0 | 5 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2023-03-20T05:51:55Z |
---
license: artistic-2.0
---
The description below was created using machine translation.
This model merges Pastel Mix, an oil-paint-trained model, and the default Stable Diffusion 1.5 model.
It is an oil-painting-inspired, anime-style model with bright, vibrant colors and soft brushstrokes.
Use the "oil paint" prompt to soften outlines and make the colors more vivid. Without the oil paint prompt, the lines are relatively bold and the colors are somewhat muted.

|
CristianLazoQuispe/AIorNot-model
|
CristianLazoQuispe
| 2023-03-20T06:02:23Z | 0 | 0 | null |
[
"image-classification",
"en",
"dataset:competitions/aiornot",
"license:mit",
"region:us"
] |
image-classification
| 2023-03-20T05:25:24Z |
---
license: mit
datasets:
- competitions/aiornot
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: image-classification
---
Fatima 2023 Application
This project addresses an image classification task with two classes: artificial and natural.
Setup:
```sh
pip install -r requirements.txt
```
Inference:
```python
from torchvision import transforms
from PIL import Image
import torch

inference_transform = transforms.Compose([
    transforms.Resize(128),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
                         std=[0.2023, 0.1994, 0.2010]),
])

# load image and model
img_example = Image.open("image_example.png").convert('RGB')
print("image loaded!")
model_loaded = torch.load("fatima_challenge_model_exp3.pt")
model_loaded.eval()
print("model loaded!")

img_example_transformed = inference_transform(img_example)
out = model_loaded(img_example_transformed.to(torch.device("cuda:0")).unsqueeze(0))  # Generate predictions
_, outs = torch.max(out, 1)
prediction = "natural" if int(outs.cpu().numpy()) == 0 else "artificial"
print("prediction =", prediction)
```
|
jinukoo/a2c-PandaReachDense-v2
|
jinukoo
| 2023-03-20T06:01:35Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T04:08:43Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.91 +/- 0.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub(repo_id="jinukoo/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
msp3887/q-FrozenLake-v1-4x4-noSlippery
|
msp3887
| 2023-03-20T05:59:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T05:59:42Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="msp3887/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pfunk/CartPole-v1-CP_DQPN_x1-seed4
|
pfunk
| 2023-03-20T05:43:27Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:59:15Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 145.59 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x1]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x1 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x1-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x1-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x1-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x1 --policy-network-frequency 20 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x1',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 20,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
ashishj20/ppo-pyramids-rnd
|
ashishj20
| 2023-03-20T05:41:56Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-20T05:06:35Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: ashishj20/ppo-pyramids-rnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jackhhhh/rl_course_vizdoom_health_gathering_supreme
|
jackhhhh
| 2023-03-20T05:36:35Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T05:36:23Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.91 +/- 0.41
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jackhhhh/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
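# Note: the module path below was captured from a notebook kernel; substitute the enjoy/evaluation script you actually use.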
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
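# Note: the module path below was captured from a notebook kernel; substitute the training script you actually use.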
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
nobtunotnobutno/Lovely-LORA
|
nobtunotnobutno
| 2023-03-20T05:01:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-20T04:33:37Z |
---
license: creativeml-openrail-m
---
|
luongphamit/DreamShaper
|
luongphamit
| 2023-03-20T04:56:45Z | 14 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-20T01:33:24Z |
---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
inference: true
---
# Dream Shaper
## Official Repository
Read more about this model here: https://civitai.com/models/4384/dreamshaper
Please also support the model by giving it 5 stars and a heart, which will notify you of new updates.
Also consider supporting me on Patreon or Buy Me a Coffee
- https://www.patreon.com/Lykon275
- https://www.buymeacoffee.com/lykon
You can run this model on:
- https://huggingface.co/spaces/Lykon/DreamShaper-webui
- https://sinkin.ai/m/4zdwGOB
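You can also load it locally; a minimal text-to-image sketch (assuming the repo ships standard StableDiffusionPipeline weights, as its tags indicate; the prompt is a placeholder):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("luongphamit/DreamShaper")
pipe = pipe.to("cuda")

image = pipe("portrait photo of a woman, intricate details, soft lighting").images[0]
image.save("dreamshaper_sample.png")
```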
Be sure to check out NeverEnding Dream, which is another semi-realistic model which aims at being fully compatible with booru tag loras and prompts
- https://huggingface.co/Lykon/NeverEnding-Dream
Some sample output:





|
liuyanchen1015/roberta-base-mnli_IndE
|
liuyanchen1015
| 2023-03-20T04:43:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-18T21:36:57Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-mnli_IndE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mnli_IndE
This model is a fine-tuned version of [WillHeld/roberta-base-mnli](https://huggingface.co/WillHeld/roberta-base-mnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7633
- Acc: 0.8517
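A minimal NLI inference sketch (the premise/hypothesis pair is a placeholder; the label mapping comes from the checkpoint's `id2label`, which you should verify):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "liuyanchen1015/roberta-base-mnli_IndE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "The kids are playing cricket in the gully."   # placeholder premise
hypothesis = "Children are playing a game outside."      # placeholder hypothesis
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 3))
```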
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3903 | 0.17 | 2000 | 0.4502 | 0.8359 |
| 0.3776 | 0.33 | 4000 | 0.4488 | 0.8378 |
| 0.3694 | 0.5 | 6000 | 0.4400 | 0.8408 |
| 0.3679 | 0.67 | 8000 | 0.4412 | 0.8395 |
| 0.3584 | 0.83 | 10000 | 0.4079 | 0.8514 |
| 0.3618 | 1.0 | 12000 | 0.4326 | 0.8433 |
| 0.2582 | 1.17 | 14000 | 0.4738 | 0.8459 |
| 0.2603 | 1.33 | 16000 | 0.4921 | 0.8468 |
| 0.2608 | 1.5 | 18000 | 0.4542 | 0.8498 |
| 0.2591 | 1.67 | 20000 | 0.4709 | 0.8483 |
| 0.263 | 1.83 | 22000 | 0.4955 | 0.8466 |
| 0.2611 | 2.0 | 24000 | 0.4829 | 0.8513 |
| 0.1802 | 2.17 | 26000 | 0.5470 | 0.8493 |
| 0.1819 | 2.33 | 28000 | 0.5523 | 0.8503 |
| 0.1847 | 2.5 | 30000 | 0.5160 | 0.8519 |
| 0.1886 | 2.67 | 32000 | 0.5229 | 0.8521 |
| 0.1877 | 2.83 | 34000 | 0.5024 | 0.8528 |
| 0.1839 | 3.0 | 36000 | 0.5456 | 0.8536 |
| 0.1322 | 3.17 | 38000 | 0.6997 | 0.8492 |
| 0.1385 | 3.33 | 40000 | 0.6212 | 0.8534 |
| 0.1326 | 3.5 | 42000 | 0.6629 | 0.8529 |
| 0.1355 | 3.67 | 44000 | 0.6448 | 0.8516 |
| 0.1332 | 3.83 | 46000 | 0.6411 | 0.8544 |
| 0.1372 | 4.0 | 48000 | 0.6574 | 0.8526 |
| 0.1056 | 4.17 | 50000 | 0.7427 | 0.8529 |
| 0.1053 | 4.33 | 52000 | 0.7466 | 0.8518 |
| 0.1062 | 4.5 | 54000 | 0.7734 | 0.8536 |
| 0.1056 | 4.67 | 56000 | 0.7623 | 0.8518 |
| 0.1072 | 4.83 | 58000 | 0.7633 | 0.8517 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.12.1
|
liuyanchen1015/roberta-base-mnli_CollSgE
|
liuyanchen1015
| 2023-03-20T04:42:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-18T21:36:57Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-mnli_CollSgE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mnli_CollSgE
This model is a fine-tuned version of [WillHeld/roberta-base-mnli](https://huggingface.co/WillHeld/roberta-base-mnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7610
- Acc: 0.8445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4123 | 0.17 | 2000 | 0.4693 | 0.8332 |
| 0.4028 | 0.33 | 4000 | 0.4624 | 0.8338 |
| 0.3888 | 0.5 | 6000 | 0.4500 | 0.8375 |
| 0.3841 | 0.67 | 8000 | 0.4281 | 0.8416 |
| 0.3783 | 0.83 | 10000 | 0.4434 | 0.8365 |
| 0.3759 | 1.0 | 12000 | 0.4400 | 0.8418 |
| 0.2721 | 1.17 | 14000 | 0.5022 | 0.8427 |
| 0.2736 | 1.33 | 16000 | 0.5252 | 0.8431 |
| 0.2821 | 1.5 | 18000 | 0.4887 | 0.8409 |
| 0.2802 | 1.67 | 20000 | 0.4758 | 0.8458 |
| 0.2794 | 1.83 | 22000 | 0.4611 | 0.8458 |
| 0.2797 | 2.0 | 24000 | 0.4936 | 0.8456 |
| 0.1915 | 2.17 | 26000 | 0.5545 | 0.8462 |
| 0.1946 | 2.33 | 28000 | 0.5731 | 0.8443 |
| 0.2007 | 2.5 | 30000 | 0.5507 | 0.8428 |
| 0.2008 | 2.67 | 32000 | 0.5499 | 0.8454 |
| 0.1971 | 2.84 | 34000 | 0.5274 | 0.8483 |
| 0.2054 | 3.0 | 36000 | 0.5454 | 0.8476 |
| 0.1436 | 3.17 | 38000 | 0.6787 | 0.8442 |
| 0.1426 | 3.34 | 40000 | 0.6933 | 0.8421 |
| 0.1463 | 3.5 | 42000 | 0.6547 | 0.8455 |
| 0.1447 | 3.67 | 44000 | 0.6469 | 0.8438 |
| 0.1445 | 3.84 | 46000 | 0.6626 | 0.8472 |
| 0.1457 | 4.0 | 48000 | 0.6494 | 0.8504 |
| 0.1133 | 4.17 | 50000 | 0.7664 | 0.8459 |
| 0.1138 | 4.34 | 52000 | 0.7857 | 0.8452 |
| 0.1154 | 4.5 | 54000 | 0.7623 | 0.8486 |
| 0.1102 | 4.67 | 56000 | 0.7740 | 0.8460 |
| 0.1143 | 4.84 | 58000 | 0.7610 | 0.8445 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.12.1
|
bkhan2000/LunaLander-v2
|
bkhan2000
| 2023-03-20T04:32:18Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T04:32:05Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -138.99 +/- 59.23
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
sinu/IndoBERT-ExamQA
|
sinu
| 2023-03-20T04:27:38Z | 21 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"id",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-26T11:23:13Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: IndoBERT-ExamQA
results: []
datasets:
- squad_v2
language:
- id
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoBERT-ExamQA
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8183
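A minimal extractive-QA sketch with the question-answering pipeline (the Indonesian question/context pair is a placeholder):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sinu/IndoBERT-ExamQA")
result = qa(
    question="Siapa presiden pertama Indonesia?",                      # placeholder question
    context="Soekarno adalah presiden pertama Republik Indonesia.",    # placeholder context
)
print(result["answer"], result["score"])
```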
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: -
- train_batch_size: -
- eval_batch_size: -
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.395 | 1.0 | 8202 | 1.3536 |
| 1.1534 | 2.0 | 16404 | 1.4040 |
| 1.2816 | 2.0 | 32808 | 1.8183 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
sd-dreambooth-library/fashion
|
sd-dreambooth-library
| 2023-03-20T04:22:59Z | 45 | 24 |
diffusers
|
[
"diffusers",
"dreambooth-hackathon",
"Text-to-image",
"stable-diffusion",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-01T03:31:46Z |
---
license: apache-2.0
tags:
- dreambooth-hackathon
- Text-to-image
- stable-diffusion
pipeline_tag: text-to-image
---
https://civitai.com/models/21642/fashion3d
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("sd-dreambooth-library/fashion")
pipe = pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```




|
golightly/Reinforce-PixelCopter
|
golightly
| 2023-03-20T04:02:58Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T23:00:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.50 +/- 15.11
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kejian/cpsc-wmle-1.1
|
kejian
| 2023-03-20T03:52:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-03-19T04:39:02Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
model-index:
- name: kejian/cpsc-wmle-1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/cpsc-wmle-1.1
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000 and the tomekkorbak/detoxify-pile-chunk3-1800000-1850000 datasets.
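A minimal generation sketch (assuming the checkpoint loads as a standard GPT-2 causal LM, as the tags suggest; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kejian/cpsc-wmle-1.1")
print(generator("The city council met on Tuesday to", max_new_tokens=40)[0]["generated_text"])
```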
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 42724
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [42724],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [42724],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 2048},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'beta': 1.1, 'exponential': False, 'name': 'WeightedMLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/cpsc-wmle-1.1',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0007,
'logging_first_step': True,
'logging_steps': 50,
'num_tokens': 2800000000.0,
'output_dir': 'training_output_1.1',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 21362,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1efu1obk
|
58AILab/wenet_efficient_conformer_aishell_v2
|
58AILab
| 2023-03-20T03:45:49Z | 0 | 5 | null |
[
"automatic-speech-recognition",
"en",
"zh",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2023-03-20T03:43:34Z |
---
license: apache-2.0
language:
- en
- zh
metrics:
- cer
pipeline_tag: automatic-speech-recognition
---
## Efficient Conformer v2 for non-streaming ASR
**Specification**: https://github.com/wenet-e2e/wenet/pull/1636
## Aishell-1 Results
* Feature info:
* using fbank feature, cmvn, speed perturb, dither
* Training info:
* [train_u2++_efficonformer_v2.yaml](https://github.com/wenet-e2e/wenet/blob/main/examples/aishell/s0/conf/train_u2%2B%2B_efficonformer_v2.yaml)
* 8 gpu, batch size 16, acc_grad 1, 200 epochs
* lr 0.001, warmup_steps 25000
* Model info:
* Model Params: 49,354,651
* Downsample rate: 1/2 (conv2d2) * 1/4 (efficonformer block)
* encoder_dim 256, output_size 256, head 8, linear_units 2048
* num_blocks 12, cnn_module_kernel 15, group_size 3
* Decoding info:
* ctc_weight 0.5, reverse_weight 0.3, average_num 20
| decoding mode | full | 18 | 16 |
|------------------------|------|------|------|
| attention decoder | 4.87 | 5.03 | 5.07 |
| ctc prefix beam search | 4.97 | 5.18 | 5.20 |
| attention rescoring | 4.56 | 4.75 | 4.77 |
## Start to Use
Install **WeNet** follow: https://wenet.org.cn/wenet/install.html#install-for-training
Decode
```sh
cd wenet/examples/aishell/s0
dir=exp/wenet_efficient_conformer_aishell_v2/
ctc_weight=0.5
reverse_weight=0.3
decoding_chunk_size=-1
mode="attention_rescoring"
test_dir=$dir/test_${mode}
mkdir -p $test_dir
# Decode
nohup python wenet/bin/recognize.py --gpu 0 \
--mode $mode \
--config $dir/train.yaml \
--data_type "raw" \
--test_data data/test/data.list \
--checkpoint $dir/final.pt \
--beam_size 10 \
--batch_size 1 \
--penalty 0.0 \
--dict $dir/words.txt \
--ctc_weight $ctc_weight \
--reverse_weight $reverse_weight \
--result_file $test_dir/text \
${decoding_chunk_size:+--decoding_chunk_size $decoding_chunk_size} > logs/decode_aishell.log &
# CER
python tools/compute-cer.py --char=1 --v=1 \
data/test/text $test_dir/text > $test_dir/cer.txt
```
|
OpenMatch/ance-tele_triviaqa_qry-encoder
|
OpenMatch
| 2023-03-20T03:32:01Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2210.17167",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-10-30T07:10:45Z |
---
license: mit
---
This model is the **query** encoder of ANCE-Tele trained on TriviaQA, described in the EMNLP 2022 paper ["Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives"](https://arxiv.org/pdf/2210.17167.pdf). The associated GitHub repository is available at https://github.com/OpenMatch/ANCE-Tele.
ANCE-Tele only trains with self-mined negatives (teleportation negatives) without using additional negatives (e.g., BM25, other DR systems) and eliminates the dependency on filtering strategies and distillation modules.
|TriviaQA (Test)|R@5|R@20|R@100|
|:---|:---|:---|:---|
|ANCE-Tele|76.9|83.4|87.3|
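A minimal sketch for encoding queries with this checkpoint (CLS pooling is an assumption; check the ANCE-Tele repository for the exact pooling and scoring setup):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "OpenMatch/ance-tele_triviaqa_qry-encoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

queries = ["who wrote the novel moby dick?"]  # placeholder query
inputs = tokenizer(queries, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state[:, 0]  # CLS-token embedding (assumed pooling)
print(embeddings.shape)
```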
```
@inproceedings{sun2022ancetele,
title={Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives},
author={Si, Sun and Chenyan, Xiong and Yue, Yu and Arnold, Overwijk and Zhiyuan, Liu and Jie, Bao},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
```
|
OpenMatch/ance-tele_nq_qry-encoder
|
OpenMatch
| 2023-03-20T03:31:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2210.17167",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-10-30T07:08:39Z |
---
license: mit
---
This model is the **query** encoder of ANCE-Tele trained on NQ, described in the EMNLP 2022 paper ["Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives"](https://arxiv.org/pdf/2210.17167.pdf). The associated GitHub repository is available at https://github.com/OpenMatch/ANCE-Tele.
ANCE-Tele only trains with self-mined negatives (teleportation negatives) without using additional negatives (e.g., BM25, other DR systems) and eliminates the dependency on filtering strategies and distillation modules.
|NQ (Test)|R@5|R@20|R@100|
|:---|:---|:---|:---|
|ANCE-Tele|77.0|84.9|89.7|
```
@inproceedings{sun2022ancetele,
title={Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives},
author={Si, Sun and Chenyan, Xiong and Yue, Yu and Arnold, Overwijk and Zhiyuan, Liu and Jie, Bao},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
```
|
OpenMatch/ance-tele_msmarco_qry-psg-encoder
|
OpenMatch
| 2023-03-20T03:30:39Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2210.17167",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-10-30T06:53:38Z |
---
license: mit
---
This model is ANCE-Tele trained on MS MARCO, described in the EMNLP 2022 paper ["Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives"](https://arxiv.org/pdf/2210.17167.pdf). The associated GitHub repository is available at https://github.com/OpenMatch/ANCE-Tele.
ANCE-Tele only trains with self-mined negatives (teleportation negatives) without using additional negatives (e.g., BM25, other DR systems) and eliminates the dependency on filtering strategies and distillation modules.
|MS MARCO (Dev)|MRR@10|R@1K|
|:---|:---|:---|
|ANCE-Tele|39.1|98.4|
```
@inproceedings{sun2022ancetele,
title={Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives},
author={Si, Sun and Chenyan, Xiong and Yue, Yu and Arnold, Overwijk and Zhiyuan, Liu and Jie, Bao},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
```
|
jackhhhh/Reinforce_Pixelcopter-PLE-v0-2
|
jackhhhh
| 2023-03-20T03:19:35Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T03:19:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Pixelcopter-PLE-v0-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.80 +/- 21.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
pfunk/CartPole-v1-CP_DQPN_x50-seed4
|
pfunk
| 2023-03-20T03:00:45Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T03:00:41Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 346.65 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x50.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x50]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x50 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x50-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x50-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x50-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x50 --policy-network-frequency 1000 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x50',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 1000,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-CP_DQPN_x5-seed1
|
pfunk
| 2023-03-20T03:00:03Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T03:00:00Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x5.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x5]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x5 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x5 --policy-network-frequency 100 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x5',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 100,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-CP_DQPN_x5-seed2
|
pfunk
| 2023-03-20T02:59:56Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:59:51Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 495.13 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x5.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x5]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x5 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed2/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x5 --policy-network-frequency 100 --seed 2
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x5',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 100,
'policy_tau': 1.0,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-CP_DQPN_x5-seed4
|
pfunk
| 2023-03-20T02:59:40Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:59:38Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 55.21 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x5.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x5]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x5 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x5 --policy-network-frequency 100 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x5',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 100,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-CP_DQPN_x2-seed4
|
pfunk
| 2023-03-20T02:59:33Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:59:31Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 495.46 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x2.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x2]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x2 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x2 --policy-network-frequency 40 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x2',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 40,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-CP_DQPN_x2-seed2
|
pfunk
| 2023-03-20T02:59:24Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:59:21Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 494.32 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x2.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x2]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x2 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed2/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x2 --policy-network-frequency 40 --seed 2
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x2',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 40,
'policy_tau': 1.0,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-CP_DQPN_x2-seed1
|
pfunk
| 2023-03-20T02:59:17Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:59:14Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x2.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x2]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x2 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x2 --policy-network-frequency 40 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x2',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 40,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-CP_DQPN_x2-seed3
|
pfunk
| 2023-03-20T02:59:12Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:59:09Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x2.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x2]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x2 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x2 --policy-network-frequency 40 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x2',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 40,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
cpark2/cp_sent_model
|
cpark2
| 2023-03-20T02:37:52Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-19T02:12:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cpark2/cp_sent_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cpark2/cp_sent_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1346
- Train Accuracy: 0.9290
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
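The card does not state which labels the classifier predicts or what data it was trained on. Assuming a standard TensorFlow sequence-classification head, inference would look roughly like the sketch below (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

# framework="tf" because this repository ships TensorFlow weights
classifier = pipeline("text-classification", model="cpark2/cp_sent_model", framework="tf")
print(classifier("I really enjoyed this movie!"))  # label names depend on the (unknown) training setup
```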
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.2517 | 0.9293 | 0 |
| 0.1346 | 0.9290 | 1 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.10.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
yujiepan/internal.swin-base-food101-int8-structured38.01
|
yujiepan
| 2023-03-20T02:35:20Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"openvino",
"swin",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-20T02:32:30Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: swin-food101-jpqd-1to2r1.5-epo10-finetuned-student
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9183762376237624
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-food101-jpqd-1to2r1.5-epo10-finetuned-student
This model is a fine-tuned version of [skylord/swin-finetuned-food101](https://huggingface.co/skylord/swin-finetuned-food101) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2391
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3011 | 0.42 | 500 | 0.1951 | 0.9124 |
| 0.2613 | 0.84 | 1000 | 0.1897 | 0.9139 |
| 100.1552 | 1.27 | 1500 | 99.5975 | 0.7445 |
| 162.0751 | 1.69 | 2000 | 162.5020 | 0.3512 |
| 1.061 | 2.11 | 2500 | 0.7523 | 0.8550 |
| 0.9728 | 2.54 | 3000 | 0.5263 | 0.8767 |
| 0.5851 | 2.96 | 3500 | 0.4599 | 0.8892 |
| 0.4668 | 3.38 | 4000 | 0.4064 | 0.8938 |
| 0.6967 | 3.8 | 4500 | 0.3814 | 0.8986 |
| 0.4928 | 4.23 | 5000 | 0.3522 | 0.9036 |
| 0.4893 | 4.65 | 5500 | 0.3562 | 0.9026 |
| 0.5421 | 5.07 | 6000 | 0.3182 | 0.9049 |
| 0.4405 | 5.49 | 6500 | 0.3112 | 0.9071 |
| 0.4423 | 5.92 | 7000 | 0.3012 | 0.9092 |
| 0.4143 | 6.34 | 7500 | 0.2958 | 0.9095 |
| 0.4997 | 6.76 | 8000 | 0.2796 | 0.9126 |
| 0.2448 | 7.19 | 8500 | 0.2747 | 0.9124 |
| 0.4468 | 7.61 | 9000 | 0.2699 | 0.9144 |
| 0.4163 | 8.03 | 9500 | 0.2583 | 0.9166 |
| 0.3651 | 8.45 | 10000 | 0.2567 | 0.9165 |
| 0.3946 | 8.88 | 10500 | 0.2489 | 0.9176 |
| 0.3196 | 9.3 | 11000 | 0.2444 | 0.9180 |
| 0.312 | 9.72 | 11500 | 0.2402 | 0.9172 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
yujiepan/internal.swin-base-food101-int8-structured38.63
|
yujiepan
| 2023-03-20T02:31:24Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"openvino",
"swin",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-20T02:28:21Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: swin-food101-jpqd-1to2r1.5-epo7-finetuned-student
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9123960396039604
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-food101-jpqd-1to2r1.5-epo7-finetuned-student
This model is a fine-tuned version of [skylord/swin-finetuned-food101](https://huggingface.co/skylord/swin-finetuned-food101) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2658
- Accuracy: 0.9124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2977 | 0.42 | 500 | 0.1949 | 0.9112 |
| 0.3183 | 0.84 | 1000 | 0.1867 | 0.9144 |
| 99.9552 | 1.27 | 1500 | 99.4882 | 0.7577 |
| 162.4195 | 1.69 | 2000 | 162.7763 | 0.3373 |
| 1.2272 | 2.11 | 2500 | 0.7333 | 0.8564 |
| 1.0236 | 2.54 | 3000 | 0.5016 | 0.8823 |
| 0.6472 | 2.96 | 3500 | 0.4337 | 0.8908 |
| 0.52 | 3.38 | 4000 | 0.3927 | 0.8974 |
| 0.6075 | 3.8 | 4500 | 0.3506 | 0.9011 |
| 0.5348 | 4.23 | 5000 | 0.3425 | 0.9006 |
| 0.444 | 4.65 | 5500 | 0.3268 | 0.9044 |
| 0.5787 | 5.07 | 6000 | 0.3020 | 0.9078 |
| 0.3995 | 5.49 | 6500 | 0.2932 | 0.9095 |
| 0.414 | 5.92 | 7000 | 0.2806 | 0.9104 |
| 0.4386 | 6.34 | 7500 | 0.2738 | 0.9112 |
| 0.452 | 6.76 | 8000 | 0.2673 | 0.9127 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
thefrigidliquidation/pythia-1b-lightnovels
|
thefrigidliquidation
| 2023-03-20T02:21:11Z | 32 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-08T00:03:37Z |
---
license: apache-2.0
language:
- en
- ja
---
# Pythia 1B fine-tuned on Light Novels
This model was fine-tuned on light and web novels. It was trained for translation but can do generation too.
It is a test of two ideas: using monolingual data to improve translation, and improving translation by adding similar sentence pairs to the prompt.
## English generation
To generate English text with this model, start your prompt with `<|gen_en|>`.
## Japanese generation
To generate Japanese text with this model, start your prompt with `<|gen_ja|>`.
## Japanese to English translation
To translate, format your prompt as such
```
<|tl_ja|>JAPANESE EXAMPLE SENTENCE 1<|tl_en|>ENGLISH EXAMPLE SENTENCE 1<|tl_end|>
<|tl_ja|>JAPANESE EXAMPLE SENTENCE 2<|tl_en|>ENGLISH EXAMPLE SENTENCE 2<|tl_end|>
<|tl_ja|>JAPANESE SENTENCE TO TRANSLATE<|tl_en|>
```
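Since these tags are plain-text markers, the model should be usable through the standard `transformers` causal-LM API. The snippet below is a minimal, hedged sketch (the prompt and sampling settings are illustrative, not from the original card):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "thefrigidliquidation/pythia-1b-lightnovels"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# English generation: prefix the prompt with <|gen_en|>
prompt = "<|gen_en|>The abandoned shrine stood at the edge of the forest,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))

# For translation, build the few-shot prompt shown above and stop decoding at <|tl_end|>.
```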
|
jackhhhh/LunarLander-v2-1
|
jackhhhh
| 2023-03-20T02:08:51Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:08:40Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -105.10 +/- 39.46
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'jackhhhh/LunarLander-v2-1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
jinukoo/ppo-PyramidsRND
|
jinukoo
| 2023-03-20T02:07:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-20T00:57:39Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: jinukoo/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jjlira/poca-SoccerTwos
|
jjlira
| 2023-03-20T02:00:13Z | 32 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-20T02:00:05Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: jjlira/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MakiPan/Reinforce-CartPole-v2
|
MakiPan
| 2023-03-20T01:50:08Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T01:39:04Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
naeisher/ppo-Pyramids
|
naeisher
| 2023-03-20T01:36:07Z | 37 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-19T21:21:46Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: naeisher/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Surteng/embeddings
|
Surteng
| 2023-03-20T01:27:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-26T01:38:09Z |
---
license: creativeml-openrail-m
---
|
Fred99774/ubervlara
|
Fred99774
| 2023-03-20T01:25:20Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-20T01:14:15Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ubervlara Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
yonathanstwn/opus-mt-en-id-ccmatrix-lr-5
|
yonathanstwn
| 2023-03-20T01:24:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:ccmatrix",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-18T21:48:45Z |
---
tags:
- generated_from_trainer
datasets:
- ccmatrix
metrics:
- bleu
model-index:
- name: opus-mt-en-id-ccmatrix-lr-5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ccmatrix
type: ccmatrix
config: en-id
split: train
args: en-id
metrics:
- name: Bleu
type: bleu
value: 65.4357
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-id-ccmatrix-lr-5
This model was trained from scratch on the ccmatrix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6093
- Bleu: 65.4357
## Model description
More information needed
## Intended uses & limitations
More information needed
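That said, since this is a Marian-architecture checkpoint, it should load with the standard `transformers` translation pipeline. The following is a hedged sketch (the input sentence is illustrative, not from the original card):
```python
from transformers import pipeline

translator = pipeline("translation", model="yonathanstwn/opus-mt-en-id-ccmatrix-lr-5")
result = translator("The weather is nice today.", max_length=64)
print(result[0]["translation_text"])  # expected to be the Indonesian translation
```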
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:------:|:---------------:|:-------:|
| 0.678 | 1.0 | 28125 | 0.6238 | 63.4798 |
| 0.5765 | 2.0 | 56250 | 0.6036 | 64.1162 |
| 0.5375 | 3.0 | 84375 | 0.5953 | 64.4048 |
| 0.5098 | 4.0 | 112500 | 0.5887 | 64.7167 |
| 0.4879 | 5.0 | 140625 | 0.5862 | 64.8577 |
| 0.4696 | 6.0 | 168750 | 0.5855 | 64.9321 |
| 0.4539 | 7.0 | 196875 | 0.5835 | 64.9806 |
| 0.4401 | 8.0 | 225000 | 0.5875 | 65.1012 |
| 0.4279 | 9.0 | 253125 | 0.5864 | 65.1125 |
| 0.4168 | 10.0 | 281250 | 0.5870 | 65.1402 |
| 0.4069 | 11.0 | 309375 | 0.5905 | 65.2012 |
| 0.3977 | 12.0 | 337500 | 0.5905 | 65.3486 |
| 0.3895 | 13.0 | 365625 | 0.5944 | 65.3406 |
| 0.3817 | 14.0 | 393750 | 0.5957 | 65.3218 |
| 0.3749 | 15.0 | 421875 | 0.5978 | 65.3269 |
| 0.3683 | 16.0 | 450000 | 0.5989 | 65.355 |
| 0.3624 | 17.0 | 478125 | 0.6009 | 65.4288 |
| 0.3573 | 18.0 | 506250 | 0.6007 | 65.4001 |
| 0.3525 | 19.0 | 534375 | 0.6035 | 65.4446 |
| 0.3484 | 20.0 | 562500 | 0.6054 | 65.3843 |
| 0.3448 | 21.0 | 590625 | 0.6060 | 65.392 |
| 0.3415 | 22.0 | 618750 | 0.6078 | 65.4052 |
| 0.3388 | 23.0 | 646875 | 0.6082 | 65.3898 |
| 0.3365 | 24.0 | 675000 | 0.6089 | 65.4171 |
| 0.3349 | 25.0 | 703125 | 0.6093 | 65.4357 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.11.0
|
yonathanstwn/opus-mt-en-id-ccmatrix-lr-4
|
yonathanstwn
| 2023-03-20T01:20:32Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:ccmatrix",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-18T21:48:45Z |
---
tags:
- generated_from_trainer
datasets:
- ccmatrix
metrics:
- bleu
model-index:
- name: opus-mt-en-id-ccmatrix-lr-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ccmatrix
type: ccmatrix
config: en-id
split: train
args: en-id
metrics:
- name: Bleu
type: bleu
value: 65.4544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-id-ccmatrix-lr-4
This model was trained from scratch on the ccmatrix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9115
- Bleu: 65.4544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:------:|:---------------:|:-------:|
| 0.7279 | 1.0 | 28125 | 0.7185 | 61.1164 |
| 0.6185 | 2.0 | 56250 | 0.6849 | 62.0536 |
| 0.5598 | 3.0 | 84375 | 0.6759 | 62.3915 |
| 0.5163 | 4.0 | 112500 | 0.6646 | 62.9303 |
| 0.4795 | 5.0 | 140625 | 0.6665 | 63.3461 |
| 0.4471 | 6.0 | 168750 | 0.6692 | 63.5319 |
| 0.4173 | 7.0 | 196875 | 0.6690 | 63.7436 |
| 0.3897 | 8.0 | 225000 | 0.6739 | 63.8343 |
| 0.3633 | 9.0 | 253125 | 0.6832 | 63.867 |
| 0.3382 | 10.0 | 281250 | 0.6928 | 64.0481 |
| 0.314 | 11.0 | 309375 | 0.7015 | 64.0177 |
| 0.2909 | 12.0 | 337500 | 0.7151 | 64.3563 |
| 0.2687 | 13.0 | 365625 | 0.7265 | 64.2445 |
| 0.2474 | 14.0 | 393750 | 0.7384 | 64.5093 |
| 0.227 | 15.0 | 421875 | 0.7560 | 64.3729 |
| 0.2072 | 16.0 | 450000 | 0.7712 | 64.6396 |
| 0.1888 | 17.0 | 478125 | 0.7876 | 64.805 |
| 0.1713 | 18.0 | 506250 | 0.8052 | 64.7883 |
| 0.1546 | 19.0 | 534375 | 0.8258 | 64.9535 |
| 0.1394 | 20.0 | 562500 | 0.8421 | 64.9885 |
| 0.1251 | 21.0 | 590625 | 0.8593 | 65.1229 |
| 0.112 | 22.0 | 618750 | 0.8757 | 65.2565 |
| 0.1006 | 23.0 | 646875 | 0.8923 | 65.288 |
| 0.0907 | 24.0 | 675000 | 0.9033 | 65.3973 |
| 0.0828 | 25.0 | 703125 | 0.9115 | 65.4544 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.11.0
|
kejian/cpsc-wmle-0.93
|
kejian
| 2023-03-20T01:12:20Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-03-19T04:19:15Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
model-index:
- name: kejian/cpsc-wmle-0.93
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/cpsc-wmle-0.93
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000 and the tomekkorbak/detoxify-pile-chunk3-1800000-1850000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
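No inference example is provided. Since the model uses the GPT-2 architecture, the usual causal-LM text-generation pipeline should apply, assuming the tokenizer was pushed alongside the weights (a hedged sketch with an illustrative prompt):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kejian/cpsc-wmle-0.93")
out = generator("The city council met on Tuesday to", max_new_tokens=50, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```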
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 42724
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [42724],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [42724],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 2048},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'beta': 0.93, 'exponential': False, 'name': 'WeightedMLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/cpsc-wmle-0.93',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0007,
'logging_first_step': True,
'logging_steps': 50,
'num_tokens': 2800000000.0,
'output_dir': 'training_output_0.93',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 21362,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/bcisua7o
|
LozanoJohan/q-Gym-Taxi
|
LozanoJohan
| 2023-03-20T00:58:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T00:58:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Gym-Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook (it downloads
# and unpickles the saved model dictionary from the Hub).
model = load_from_hub(repo_id="LozanoJohan/q-Gym-Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
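Continuing from the snippet above, a quick greedy evaluation of the loaded Q-table can be sketched as follows. The `qtable` key is an assumption based on how the Deep RL Course notebooks save these models, and the reset/step signatures follow the classic `gym` API (gymnasium differs slightly):
```python
import numpy as np

qtable = model["qtable"]  # assumed key name from the Deep RL Course notebooks

state = env.reset()  # classic gym API; gymnasium returns (state, info)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))     # act greedily w.r.t. the learned Q-values
    state, reward, done, _ = env.step(action)  # gymnasium also returns a `truncated` flag
    total_reward += reward
print("Episode return:", total_reward)
```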
|
vocabtrimmer/mt5-small-trimmed-ja-90000-jaquad-qa
|
vocabtrimmer
| 2023-03-20T00:42:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"ja",
"dataset:lmqg/qg_jaquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-18T22:27:56Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ja
datasets:
- lmqg/qg_jaquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。"
example_title: "Question Answering Example 1"
- text: "question: 1968年に開催されたオリンピックの名前は何ですか?, context: オリンピックが世界的大イベントに成長するに従って政治に左右されるようになると、1968年のメキシコシティ大会では黒人差別を訴える場と化し、1972年のミュンヘン大会ではアラブのゲリラによるイスラエル選手に対するテロ事件まで起きた(ミュンヘンオリンピック事件)。1976年のモントリオール大会になると、ニュージーランドのラグビーチームの南アフリカ遠征に反対してアフリカの諸国22ヶ国がボイコットを行った。そして、1980年のモスクワ大会ではソ連のアフガニスタン侵攻に反発したアメリカ・西ドイツ・日本などの西側諸国が相次いでボイコットを行った。1984年ロサンゼルス大会ではソ連と東側諸国が報復ボイコットを行ない、参加したのはソ連と対立していた中国とルーマニアだけだった。中でも、イラン革命後のイラン・イスラム共和国はモスクワとロサンゼルス双方のオリンピックをボイコットしている。オリンピックが巨大化するに従って財政負担の増大が大きな問題となり、1976年の夏季大会では大幅な赤字を出し、その後夏季・冬季とも立候補都市が1〜2都市だけという状態が続いた。"
example_title: "Question Answering Example 2"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-ja-90000-jaquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 0.0
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 64.91
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 50.65
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 96.52
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 89.55
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 66.9
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 66.9
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-ja-90000-jaquad-qa`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ja-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-90000) for the question answering task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-ja-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-90000)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="vocabtrimmer/mt5-small-trimmed-ja-90000-jaquad-qa")
# model prediction
answers = model.answer_q(list_question="新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?", list_context=" 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-ja-90000-jaquad-qa")
output = pipe("question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-90000-jaquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 66.9 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| AnswerF1Score | 66.9 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| BERTScore | 96.52 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 62.61 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 50.65 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 89.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 64.91 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-ja-90000
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 32
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-90000-jaquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
jinukoo/ppo-SnowballTarget
|
jinukoo
| 2023-03-20T00:22:10Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-20T00:22:04Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: jinukoo/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
liqi6811/skillsBERT_v3_tf_epoch200_Important
|
liqi6811
| 2023-03-20T00:20:04Z | 2 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2023-03-20T00:19:19Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: skillsBERT_v3_epoch200
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# skillsBERT_v3_epoch200
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0256
- Validation Loss: 7.8012
- Epoch: 199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamW', 'weight_decay': 0.004, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.8416 | 6.7417 | 0 |
| 6.2162 | 6.1436 | 1 |
| 5.3514 | 5.3111 | 2 |
| 4.5931 | 4.9790 | 3 |
| 4.0664 | 4.8477 | 4 |
| 3.6654 | 4.6776 | 5 |
| 3.3343 | 4.5758 | 6 |
| 3.0431 | 4.4659 | 7 |
| 2.7726 | 4.4337 | 8 |
| 2.5106 | 4.6514 | 9 |
| 2.2625 | 4.6512 | 10 |
| 2.0222 | 4.7317 | 11 |
| 1.7924 | 4.6184 | 12 |
| 1.5719 | 4.7085 | 13 |
| 1.3735 | 4.8741 | 14 |
| 1.1820 | 5.0078 | 15 |
| 1.0164 | 5.0224 | 16 |
| 0.8588 | 5.2085 | 17 |
| 0.7247 | 5.2827 | 18 |
| 0.6140 | 5.3904 | 19 |
| 0.5144 | 5.3287 | 20 |
| 0.4330 | 5.4909 | 21 |
| 0.3603 | 5.6482 | 22 |
| 0.3120 | 5.4950 | 23 |
| 0.2634 | 5.8465 | 24 |
| 0.2322 | 5.8744 | 25 |
| 0.2077 | 5.8122 | 26 |
| 0.1904 | 5.9466 | 27 |
| 0.1698 | 6.0747 | 28 |
| 0.1536 | 6.2833 | 29 |
| 0.1462 | 6.2201 | 30 |
| 0.1367 | 6.3858 | 31 |
| 0.1309 | 6.4264 | 32 |
| 0.1196 | 6.3880 | 33 |
| 0.1129 | 6.6965 | 34 |
| 0.1100 | 6.4638 | 35 |
| 0.1050 | 6.4099 | 36 |
| 0.0994 | 6.4061 | 37 |
| 0.0973 | 6.4458 | 38 |
| 0.0934 | 6.5099 | 39 |
| 0.0909 | 6.4002 | 40 |
| 0.0896 | 6.5372 | 41 |
| 0.0839 | 6.5808 | 42 |
| 0.0817 | 6.4682 | 43 |
| 0.0814 | 6.6921 | 44 |
| 0.0793 | 6.7584 | 45 |
| 0.0765 | 6.7847 | 46 |
| 0.0765 | 6.8182 | 47 |
| 0.0712 | 6.7281 | 48 |
| 0.0710 | 6.7083 | 49 |
| 0.0700 | 6.6643 | 50 |
| 0.0695 | 6.7186 | 51 |
| 0.0681 | 6.9158 | 52 |
| 0.0647 | 6.8065 | 53 |
| 0.0662 | 7.0515 | 54 |
| 0.0630 | 6.9353 | 55 |
| 0.0624 | 7.0418 | 56 |
| 0.0640 | 6.7393 | 57 |
| 0.0610 | 7.0111 | 58 |
| 0.0602 | 7.0310 | 59 |
| 0.0577 | 6.7995 | 60 |
| 0.0616 | 6.7364 | 61 |
| 0.0575 | 7.0542 | 62 |
| 0.0532 | 7.1219 | 63 |
| 0.0601 | 6.9904 | 64 |
| 0.0528 | 7.2782 | 65 |
| 0.0551 | 7.2465 | 66 |
| 0.0551 | 7.2380 | 67 |
| 0.0542 | 6.9920 | 68 |
| 0.0536 | 7.1704 | 69 |
| 0.0529 | 7.1467 | 70 |
| 0.0488 | 7.0684 | 71 |
| 0.0494 | 7.0333 | 72 |
| 0.0518 | 7.3027 | 73 |
| 0.0505 | 7.1332 | 74 |
| 0.0481 | 7.0856 | 75 |
| 0.0493 | 7.2170 | 76 |
| 0.0490 | 7.3652 | 77 |
| 0.0480 | 7.3370 | 78 |
| 0.0485 | 7.1336 | 79 |
| 0.0480 | 7.2017 | 80 |
| 0.0483 | 7.2421 | 81 |
| 0.0463 | 7.3675 | 82 |
| 0.0455 | 7.3847 | 83 |
| 0.0441 | 7.3112 | 84 |
| 0.0454 | 7.2941 | 85 |
| 0.0474 | 7.4086 | 86 |
| 0.0451 | 7.1806 | 87 |
| 0.0417 | 7.4458 | 88 |
| 0.0464 | 7.2912 | 89 |
| 0.0422 | 7.6368 | 90 |
| 0.0434 | 7.4060 | 91 |
| 0.0427 | 7.4733 | 92 |
| 0.0433 | 7.4114 | 93 |
| 0.0416 | 7.3643 | 94 |
| 0.0428 | 7.5354 | 95 |
| 0.0426 | 7.2827 | 96 |
| 0.0400 | 7.4285 | 97 |
| 0.0413 | 7.4499 | 98 |
| 0.0422 | 7.4816 | 99 |
| 0.0407 | 7.3491 | 100 |
| 0.0402 | 7.3784 | 101 |
| 0.0412 | 7.3845 | 102 |
| 0.0389 | 7.5468 | 103 |
| 0.0372 | 7.4723 | 104 |
| 0.0421 | 7.4283 | 105 |
| 0.0382 | 7.4074 | 106 |
| 0.0392 | 7.4365 | 107 |
| 0.0399 | 7.4375 | 108 |
| 0.0396 | 7.5146 | 109 |
| 0.0389 | 7.2877 | 110 |
| 0.0384 | 7.3907 | 111 |
| 0.0386 | 7.5558 | 112 |
| 0.0378 | 7.3746 | 113 |
| 0.0359 | 7.5122 | 114 |
| 0.0412 | 7.4631 | 115 |
| 0.0341 | 7.5950 | 116 |
| 0.0380 | 7.3713 | 117 |
| 0.0382 | 7.4232 | 118 |
| 0.0350 | 7.5180 | 119 |
| 0.0374 | 7.4993 | 120 |
| 0.0373 | 7.4308 | 121 |
| 0.0357 | 7.4511 | 122 |
| 0.0364 | 7.5254 | 123 |
| 0.0349 | 7.4326 | 124 |
| 0.0371 | 7.5467 | 125 |
| 0.0344 | 7.5324 | 126 |
| 0.0375 | 7.4660 | 127 |
| 0.0365 | 7.5816 | 128 |
| 0.0348 | 7.5425 | 129 |
| 0.0333 | 7.5655 | 130 |
| 0.0331 | 7.6466 | 131 |
| 0.0369 | 7.6142 | 132 |
| 0.0332 | 7.7292 | 133 |
| 0.0349 | 7.6649 | 134 |
| 0.0343 | 7.5255 | 135 |
| 0.0335 | 7.7736 | 136 |
| 0.0334 | 7.6680 | 137 |
| 0.0356 | 7.4846 | 138 |
| 0.0323 | 7.7691 | 139 |
| 0.0339 | 7.6986 | 140 |
| 0.0333 | 7.4287 | 141 |
| 0.0333 | 7.5534 | 142 |
| 0.0322 | 7.5383 | 143 |
| 0.0333 | 7.5212 | 144 |
| 0.0320 | 7.5945 | 145 |
| 0.0335 | 7.5932 | 146 |
| 0.0332 | 7.7700 | 147 |
| 0.0323 | 7.4798 | 148 |
| 0.0318 | 7.5804 | 149 |
| 0.0336 | 7.5721 | 150 |
| 0.0332 | 7.3627 | 151 |
| 0.0334 | 7.6093 | 152 |
| 0.0293 | 7.7731 | 153 |
| 0.0336 | 7.6722 | 154 |
| 0.0319 | 7.5856 | 155 |
| 0.0325 | 7.6355 | 156 |
| 0.0287 | 7.5941 | 157 |
| 0.0318 | 7.6476 | 158 |
| 0.0304 | 7.5365 | 159 |
| 0.0313 | 7.6429 | 160 |
| 0.0319 | 7.5318 | 161 |
| 0.0311 | 7.7468 | 162 |
| 0.0321 | 7.6332 | 163 |
| 0.0301 | 7.8412 | 164 |
| 0.0292 | 7.6819 | 165 |
| 0.0313 | 7.5544 | 166 |
| 0.0311 | 7.6667 | 167 |
| 0.0274 | 7.7875 | 168 |
| 0.0317 | 7.6632 | 169 |
| 0.0305 | 7.8710 | 170 |
| 0.0311 | 7.5799 | 171 |
| 0.0311 | 7.7357 | 172 |
| 0.0271 | 7.7491 | 173 |
| 0.0317 | 7.8025 | 174 |
| 0.0294 | 7.6856 | 175 |
| 0.0302 | 7.7687 | 176 |
| 0.0293 | 7.8676 | 177 |
| 0.0315 | 7.6371 | 178 |
| 0.0286 | 7.8114 | 179 |
| 0.0288 | 7.6690 | 180 |
| 0.0304 | 7.6712 | 181 |
| 0.0293 | 7.8668 | 182 |
| 0.0305 | 7.8221 | 183 |
| 0.0284 | 7.7506 | 184 |
| 0.0309 | 7.6629 | 185 |
| 0.0282 | 7.7157 | 186 |
| 0.0262 | 7.8241 | 187 |
| 0.0305 | 7.6471 | 188 |
| 0.0288 | 7.6409 | 189 |
| 0.0283 | 7.7386 | 190 |
| 0.0286 | 7.8070 | 191 |
| 0.0284 | 7.7921 | 192 |
| 0.0287 | 7.9042 | 193 |
| 0.0289 | 7.7297 | 194 |
| 0.0276 | 7.8584 | 195 |
| 0.0278 | 7.8580 | 196 |
| 0.0258 | 7.9323 | 197 |
| 0.0306 | 7.7566 | 198 |
| 0.0256 | 7.8012 | 199 |
### Framework versions
- Transformers 4.28.0.dev0
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
jackhhhh/Reinforce_Pixelcopter-PLE-v0-1
|
jackhhhh
| 2023-03-20T00:12:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T00:12:04Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Pixelcopter-PLE-v0-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -2.40 +/- 0.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jackhhhh/Reinforce_Pixelcopter-PLE-v0
|
jackhhhh
| 2023-03-20T00:03:05Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T00:01:45Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -2.70 +/- 0.46
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
GamerUntouch/Lovecraft-LLamA-LoRAs
|
GamerUntouch
| 2023-03-20T00:00:33Z | 0 | 4 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-03-19T23:52:02Z |
---
license: apache-2.0
---
LoRAs for LLaMA trained on approximately 2.1 epochs of Lovecraft's entire works.
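A rough usage sketch (not part of the original card), assuming the repo holds PEFT-format adapter weights and that you supply your own converted LLaMA base checkpoint — the base path below is only a placeholder:
```python
# Hypothetical sketch: the base model path is a placeholder and the adapter repo
# is assumed to contain PEFT-format LoRA weights.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = "path/to/your-llama-7b"  # placeholder: any converted LLaMA checkpoint
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, device_map="auto")

# Attach the Lovecraft LoRA weights on top of the base model
model = PeftModel.from_pretrained(model, "GamerUntouch/Lovecraft-LLamA-LoRAs")

prompt = "The thing on the doorstep was"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```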
|
MakiPan/Reinforce-PixelCopter-PLE-v0
|
MakiPan
| 2023-03-19T23:58:51Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T23:58:44Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.30 +/- 16.08
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
WALIDALI/hallachillout
|
WALIDALI
| 2023-03-19T23:51:33Z | 12 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-19T23:47:18Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### HallaChillout Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
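Alternatively, a minimal `diffusers` sketch — assuming this repo is a full Stable Diffusion pipeline checkpoint (as the repo tags suggest) and that the DreamBooth instance token matches the model name:
```python
# Sketch only: the prompt token "hallachillout" is an assumption based on the model name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "WALIDALI/hallachillout", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of hallachillout, cozy interior, soft light").images[0]
image.save("hallachillout_sample.png")
```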
Sample pictures of this concept:
|
pfunk/CartPole-v1-DQN_newww-seed3
|
pfunk
| 2023-03-19T23:48:21Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T23:48:18Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 494.56 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_newww.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_newww]"
python -m cleanrl_utils.enjoy --exp-name DQN_newww --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_newww-seed3/raw/main/dqn.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_newww-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_newww-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQN_newww --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqn.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQN_newww',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQN_newww-seed4
|
pfunk
| 2023-03-19T23:48:18Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T23:48:15Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 495.67 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_newww.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_newww]"
python -m cleanrl_utils.enjoy --exp-name DQN_newww --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_newww-seed4/raw/main/dqn.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_newww-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_newww-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQN_newww --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqn.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQN_newww',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
jimregan/whisper-small-sv-riksdag
|
jimregan
| 2023-03-19T23:43:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"sv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-01T17:57:25Z |
---
language:
- sv
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Sv - Riksdag 100h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Sv - Riksdag 100h
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4977
- Wer: 1118.4718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 0.1384 | 0.11 | 1000 | 0.4747 | 380.8335 |
| 0.1186 | 0.22 | 2000 | 0.4513 | 1032.3900 |
| 0.1056 | 0.33 | 3000 | 0.4385 | 582.0427 |
| 0.0824 | 0.43 | 4000 | 0.4465 | 574.8907 |
| 0.0961 | 0.54 | 5000 | 0.4199 | 1004.9138 |
| 0.0939 | 0.65 | 6000 | 0.4478 | 866.2979 |
| 0.0758 | 0.76 | 7000 | 0.4384 | 907.9496 |
| 0.0741 | 0.87 | 8000 | 0.4264 | 641.1371 |
| 0.0692 | 0.98 | 9000 | 0.4206 | 1142.6550 |
| 0.0257 | 1.08 | 10000 | 0.4707 | 1152.4312 |
| 0.0273 | 1.19 | 11000 | 0.4789 | 1100.2058 |
| 0.021 | 1.3 | 12000 | 0.4763 | 1236.1719 |
| 0.0163 | 1.41 | 13000 | 0.5035 | 924.8006 |
| 0.0183 | 1.52 | 14000 | 0.4911 | 1285.1814 |
| 0.024 | 1.63 | 15000 | 0.4861 | 1140.8284 |
| 0.0158 | 1.73 | 16000 | 0.4793 | 1181.7597 |
| 0.0167 | 1.84 | 17000 | 0.4759 | 1207.3064 |
| 0.0231 | 1.95 | 18000 | 0.4801 | 1139.6964 |
| 0.0054 | 2.06 | 19000 | 0.4934 | 1114.4842 |
| 0.006 | 2.17 | 20000 | 0.4977 | 1118.4718 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
KarosY/lianjia_2l_100per800_2e-4
|
KarosY
| 2023-03-19T23:40:24Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-19T15:40:37Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/KarosY/lianjia_2l_100per800_2e-4
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were fine-tuned on the None dataset. You can find some example images in the following.




|
KarosY/lianjia_2l_100per800_1e-4
|
KarosY
| 2023-03-19T23:40:18Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-19T15:39:55Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/KarosY/lianjia_2l_100per800_1e-4
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were fine-tuned on the None dataset. You can find some example images in the following.




|
kucharskipj/ppo-SnowballTarget
|
kucharskipj
| 2023-03-19T23:32:24Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-19T23:32:19Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: kucharskipj/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KBLab/whisper-large-rixvox
|
KBLab
| 2023-03-19T23:24:34Z | 518 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"sv",
"dataset:KBLab/rixvox",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-13T06:31:06Z |
---
license: apache-2.0
datasets:
- KBLab/rixvox
language:
- sv
---
# Whisper Large RixVox Swedish
This is a [Whisper large](https://huggingface.co/openai/whisper-large-v2) finetuned for Swedish
using the [RixVox](https://huggingface.co/datasets/KBLab/rixvox) dataset.
Please note that this model, like every other encoder-decoder speech-to-text model, is prone to
hallucinating on unexpected inputs and to treating the task as translation rather than transcription,
i.e. your mileage may vary depending on filtering and the type of data.
In this release the entire encoder was frozen. Subsequent releases will not freeze the encoder **if**
generalization to other types of data (i.e. not parliamentary speeches) is preserved when the encoder
is left unfrozen.
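A minimal transcription sketch (an illustration, not from the original card), assuming a local Swedish audio file and ffmpeg available for decoding:
```python
# Standard transformers ASR pipeline; chunking handles audio longer than 30 seconds.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="KBLab/whisper-large-rixvox",
    chunk_length_s=30,
)

print(asr("speech_sv.wav")["text"])
```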
## Evaluation (test)
* RixVox WER: `22.59`
* RixVox WER (normalized*): `19.33`
* Common Voice 11 WER: `18.03`
* Common Voice 11 WER (normalized*): `13.23`
* Fleurs WER: `14.26`
* Fleurs WER (normalized*): `8.99`
*) Normalization is done by applying the following to source and generated texts:
```
from re import sub

def normalize(s):
    # lowercase, map é to e, keep only digits/Swedish letters, collapse whitespace
    return ' '.join(sub('[^0-9a-zåäöA-ZÅÄÖ ]', ' ', s.lower().replace('é', 'e')).split())
```
In comparison, the original Whisper large gets `30.56`/`25.58`, `18.76`/`15.00`, and `14.53`/`9.19`, respectively.
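For illustration, the normalized scores above could be computed along these lines (a sketch assuming the `jiwer` package and parallel lists of references and hypotheses):
```python
# Hypothetical example: apply normalize() to both sides before scoring with jiwer.
import jiwer

references = ["Det var en bra idé.", "Vi ses i morgon!"]
hypotheses = ["det var en bra ide", "vi ses imorgon"]

wer_normalized = jiwer.wer(
    [normalize(r) for r in references],
    [normalize(h) for h in hypotheses],
)
print(f"normalized WER: {wer_normalized:.4f}")
```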
## Training
Training was done using Hugging Face and DeepSpeed with ZeRO stage 2.
* learning rate: 1e-5
* optimizer: CPUAdamW (Deepspeed)
* lr scheduler: linear
* warmup steps: 500
* per device batch size: 20
* GPUs: 8 x NVIDIA A100 40GB
* total batch size: 160
* steps: 20000
* lowercase: no
* fp16
* entire encoder was frozen
|
Crosbot/marvel_snap_cards
|
Crosbot
| 2023-03-19T23:17:47Z | 0 | 0 | null |
[
"dataset:Crosbot/mv_snp_cards",
"region:us"
] | null | 2023-03-19T23:16:10Z |
---
datasets:
- Crosbot/mv_snp_cards
---
|
k4black/Salesforce-codet5-small-CodeXGLUE-CONCODE-test
|
k4black
| 2023-03-19T23:06:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-19T22:43:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: Salesforce-codet5-small-CodeXGLUE-CONCODE-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Salesforce-codet5-small-CodeXGLUE-CONCODE-test
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8508
- Exact Match: 0.156
- Rouge1: 0.5559
- Rouge2: 0.3857
- Rougel: 0.5378
- Rougelsum: 0.5465
- Bleu: 0.1246
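CONCODE maps a natural-language description to Java code, so inference is plain seq2seq generation; a minimal sketch (not part of the original card):
```python
# Hedged usage sketch for the CONCODE-style NL-to-Java task.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "k4black/Salesforce-codet5-small-CodeXGLUE-CONCODE-test"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

nl = "return the maximum value stored in the member array values"
inputs = tokenizer(nl, return_tensors="pt")
out = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```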
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|:------:|:------:|:---------:|:------:|
| 1.3563 | 0.16 | 500 | 1.1652 | 0.1115 | 0.5098 | 0.3191 | 0.4915 | 0.4982 | 0.1088 |
| 0.9656 | 0.32 | 1000 | 1.0435 | 0.1245 | 0.5246 | 0.3444 | 0.5075 | 0.5145 | 0.1164 |
| 0.8627 | 0.48 | 1500 | 0.9851 | 0.121 | 0.5275 | 0.3420 | 0.5074 | 0.5154 | 0.1132 |
| 0.7718 | 0.64 | 2000 | 0.9288 | 0.1385 | 0.5334 | 0.3589 | 0.5174 | 0.5242 | 0.1206 |
| 0.7237 | 0.8 | 2500 | 0.8867 | 0.1495 | 0.5505 | 0.3762 | 0.5328 | 0.5406 | 0.1208 |
| 0.6812 | 0.96 | 3000 | 0.8508 | 0.156 | 0.5559 | 0.3857 | 0.5378 | 0.5465 | 0.1246 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.12.1+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2
|
MakiPan/Reinforce-CartPole-v1
|
MakiPan
| 2023-03-19T23:03:59Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T23:03:47Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 461.80 +/- 114.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jackhhhh/Pixelcopter-PLE-v0
|
jackhhhh
| 2023-03-19T23:02:26Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T23:02:23Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -2.70 +/- 3.82
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kejian/cpsc-wmle-1.25
|
kejian
| 2023-03-19T22:39:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-03-19T04:08:36Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
model-index:
- name: kejian/cpsc-wmle-1.25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/cpsc-wmle-1.25
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000 and the tomekkorbak/detoxify-pile-chunk3-1800000-1850000 datasets.
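The checkpoint is a GPT-2-style causal language model, so generation follows the usual `transformers` API; a minimal sketch (the tokenizer is loaded from `gpt2`, per the training config further down the card):
```python
# Hedged generation sketch for this from-scratch GPT-2-style checkpoint.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("kejian/cpsc-wmle-1.25")

inputs = tokenizer("The weather today is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```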
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 42724
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [42724],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [42724],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 2048},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'beta': 1.25, 'exponential': False, 'name': 'WeightedMLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/cpsc-wmle-1.25',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0007,
'logging_first_step': True,
'logging_steps': 50,
'num_tokens': 2800000000.0,
'output_dir': 'training_output_1.25',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 21362,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/38oxb2oc
|
namikazi25/DCNN_on_CIFAR_10
|
namikazi25
| 2023-03-19T22:37:48Z | 0 | 0 |
keras
|
[
"keras",
"code",
"en",
"dataset:cifar10",
"license:mit",
"region:us"
] | null | 2023-03-19T22:00:51Z |
---
license: mit
datasets:
- cifar10
language:
- en
metrics:
- accuracy
library_name: keras
tags:
- code
---
|
yonathanstwn/opus-mt-id-en-open-subtitles
|
yonathanstwn
| 2023-03-19T22:34:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:open_subtitles",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-19T01:22:57Z |
---
tags:
- generated_from_trainer
datasets:
- open_subtitles
metrics:
- bleu
model-index:
- name: opus-mt-id-en-open-subtitles
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: open_subtitles
type: open_subtitles
config: en-id
split: train
args: en-id
metrics:
- name: Bleu
type: bleu
value: 36.9382
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-id-en-open-subtitles
This model was trained from scratch on the open_subtitles dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8430
- Bleu: 36.9382
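A minimal Indonesian-to-English translation sketch (an assumption, not from the card), using the generic `transformers` translation pipeline on this Marian-based checkpoint:
```python
# Hedged sketch: the checkpoint is Marian-based, so the translation pipeline applies.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="yonathanstwn/opus-mt-id-en-open-subtitles",
)

print(translator("Saya sedang belajar bahasa Inggris.")[0]["translation_text"])
```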
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:------:|:---------------:|:-------:|
| 1.3533 | 1.0 | 28125 | 1.3274 | 37.6662 |
| 1.2814 | 2.0 | 56250 | 1.3525 | 37.5909 |
| 1.2058 | 3.0 | 84375 | 1.3674 | 37.8008 |
| 1.1415 | 4.0 | 112500 | 1.3722 | 37.4849 |
| 1.0842 | 5.0 | 140625 | 1.3943 | 37.7558 |
| 1.0309 | 6.0 | 168750 | 1.3994 | 37.6332 |
| 0.9802 | 7.0 | 196875 | 1.4216 | 37.7529 |
| 0.9316 | 8.0 | 225000 | 1.4304 | 37.9906 |
| 0.8838 | 9.0 | 253125 | 1.4462 | 37.7833 |
| 0.8378 | 10.0 | 281250 | 1.4639 | 37.5971 |
| 0.7921 | 11.0 | 309375 | 1.4859 | 37.6285 |
| 0.7484 | 12.0 | 337500 | 1.5060 | 37.5413 |
| 0.7043 | 13.0 | 365625 | 1.5256 | 37.5118 |
| 0.6622 | 14.0 | 393750 | 1.5555 | 37.5092 |
| 0.6208 | 15.0 | 421875 | 1.5733 | 37.2924 |
| 0.5807 | 16.0 | 450000 | 1.6048 | 37.319 |
| 0.542 | 17.0 | 478125 | 1.6435 | 37.0629 |
| 0.5043 | 18.0 | 506250 | 1.6647 | 37.1334 |
| 0.4685 | 19.0 | 534375 | 1.7014 | 37.02 |
| 0.4352 | 20.0 | 562500 | 1.7300 | 36.9514 |
| 0.4031 | 21.0 | 590625 | 1.7572 | 36.9637 |
| 0.3731 | 22.0 | 618750 | 1.7902 | 36.9821 |
| 0.346 | 23.0 | 646875 | 1.8112 | 36.9586 |
| 0.3227 | 24.0 | 675000 | 1.8325 | 36.9286 |
| 0.303 | 25.0 | 703125 | 1.8430 | 36.9382 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.11.0
|
tbboukhari/Alpaca_instruction_fine_tune_French
|
tbboukhari
| 2023-03-19T22:31:57Z | 0 | 4 |
transformers
|
[
"transformers",
"Alpaca",
"Instruction-fine-tuning",
"NLP",
"Instruct Alpaca",
"PEFT",
"LoRA",
"fr",
"dataset:tbboukhari/Alpaca_french_instruct",
"endpoints_compatible",
"region:us"
] | null | 2023-03-19T19:25:47Z |
---
datasets:
- tbboukhari/Alpaca_french_instruct
language:
- fr
library_name: transformers
tags:
- Alpaca
- Instruction-fine-tuning
- NLP
- Instruct Alpaca
- PEFT
- LoRA
---
## How to use🦙:
```py
import torch
import bitsandbytes as bnb
from peft import PeftModel, PeftConfig, prepare_model_for_int8_training, LoraConfig, get_peft_model
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
peft_model_id = "tbboukhari/Alpaca_instruction_fine_tune_French"
config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = LlamaTokenizer.from_pretrained(config.base_model_name_or_path)
model = LlamaForCausalLM.from_pretrained(config.base_model_name_or_path,
load_in_8bit=True,
device_map="auto",)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
# Based on the inference code by `tloen/alpaca-lora`
def generate_prompt(instruction, entree=None):
if entree :
return f"""Vous trouverez ci-dessous des instructions décrivant une tâche, ainsi qu'une entrée qui fournit plus de contexte. Rédigez une réponse qui complète convenablement la demande.
### instructions:
{instruction}
### entrée:
{entree}
### sortie:"""
else:
return f"""Vous trouverez ci-dessous des instructions décrivant une tâche, ainsi qu'une entrée qui fournit plus de contexte. Rédigez une réponse qui complète convenablement la demande.
### instructions:
{instruction}
### sortie:"""
# Inputs to instantiate the model:
generation_config = GenerationConfig(
temperature=0.2,
top_p=0.75,
num_beams=4,
)
# Evaluate the model:
def evaluate(instruction, input=None):
prompt = generate_prompt(instruction, input)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=256
)
for s in generation_output.sequences:
output = tokenizer.decode(s)
print("sortie:", output.split("### sortie:")[1].strip())
evaluate(input("instructions: "))
```
|
kejian/cpsc-wmle-1
|
kejian
| 2023-03-19T22:22:56Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-03-17T17:36:09Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
model-index:
- name: kejian/cpsc-wmle-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/cpsc-wmle-1
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000 and the tomekkorbak/detoxify-pile-chunk3-1800000-1850000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 42724
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [42724],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [42724],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 2048},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'beta': 1, 'exponential': False, 'name': 'WeightedMLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/cpsc-wmle-1',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0007,
'logging_first_step': True,
'logging_steps': 50,
'num_tokens': 2800000000.0,
'output_dir': 'training_output_1',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 21362,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1b71j55s
|
shannb/t5-small-finetuned-TEC-to-eng-two
|
shannb
| 2023-03-19T22:15:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-08T23:47:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-TEC-to-eng-two
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-TEC-to-eng-two
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0135
- Bleu: 47.4124
- Gen Len: 15.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 2 | 1.6435 | 29.1493 | 15.5208 |
| No log | 2.0 | 4 | 1.3090 | 33.8289 | 14.8542 |
| No log | 3.0 | 6 | 1.1451 | 39.7632 | 14.8542 |
| No log | 4.0 | 8 | 1.0720 | 42.4127 | 15.1458 |
| No log | 5.0 | 10 | 1.0381 | 46.3985 | 15.0625 |
| No log | 6.0 | 12 | 1.0210 | 46.9342 | 15.0625 |
| No log | 7.0 | 14 | 1.0135 | 47.4124 | 15.0625 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dominguesm/positive-reframing-ptbr
|
dominguesm
| 2023-03-19T22:10:55Z | 32 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"seq2seq",
"positive_perspectives",
"pt",
"dataset:dominguesm/positive-reframing-ptbr-dataset",
"arxiv:2204.02952",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-20T17:26:47Z |
---
language: pt
license: cc-by-4.0
tags:
- seq2seq
- t5
- positive_perspectives
datasets:
- dominguesm/positive-reframing-ptbr-dataset
widget:
- text: "['growth', 'neutralizing']: Sempre estressado e pensando em um monte de coisas ao mesmo tempo, preciso levar uma de cada vez, sobrecarga estressada, necessidade de reclamar"
- text: "['growth', 'neutralizing', 'optimism']: Se eu não tiver um colapso mental antes do final do verão, será um milagre."
- text: "['impermanence']: Dirigindo para visitar a vovó no hospital e o meu filho que está doente."
- text: "['optimism']: Ótimo agora, como vou explicar isso para ela, ela está tão perto de mim que não posso perdê-la :'("
- text: "['growth', 'optimism']: sempre há algo que eu poderia estar fazendo. Eu geralmente escolho não fazer isso."
---
# Positive Perspectives with Portuguese Text Reframing
## Model description
This model is a [PTT5](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab) fine-tuned for the sentiment transfer task, where the objective is to reverse the sentiment polarity of a text without contradicting the original meaning. Positive reframing induces a complementary positive viewpoint (e.g. glass-half-full) escaping negative patterns. Based on the article [arXiv:2204.02952](https://arxiv.org/abs/2204.02952).
## How to use
The model takes one or more sentiment strategies concatenated with a sentence and generates a reframed sentence with that sentiment applied. The maximum input length is 1024 tokens. Inputs must be organized in the following format:
```
"['thankfulness', 'optimism']: Tenho tanta coisa para fazer antes de sair da cidade por uma semana no domingo."
```
### Available sentiment strategies:
**growth**: Viewing a challenging event as an opportunity for the author to specifically grow or improve themselves.
**impermanence**: Saying that bad things don't last forever, that they will get better soon, and/or that other people have had similar difficulties.
**neutralizing**: Replacing a negative word with a neutral word. For example, “This was a terrible day” becomes “This was a long day”.
**optimism**: Focusing on things about the situation itself, at that moment, that are good (not just predicting a better future).
**self_affirmation**: Talking about what strengths the author already has, or values they admire, such as love, courage, perseverance, etc.
**thankfulness**: Expressing thanks or gratitude with keywords like appreciate, happy for it, grateful for, good thing, etc.
### Usage
```python
from transformers import pipeline
pipe = pipeline('summarization', "dominguesm/positive-reframing-ptbr")
text = "['thankfulness', 'optimism']: Tenho tanta coisa para fazer antes de sair da cidade por uma semana no domingo."
pipe(text, max_length=1024)
```
|
codeSpaghetti/poca-SoccerTwos
|
codeSpaghetti
| 2023-03-19T21:55:34Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-19T21:55:28Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: codeSpaghetti/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Zilikon/q-Taxi-v3
|
Zilikon
| 2023-03-19T21:55:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T21:55:13Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook (Unit 2)
model = load_from_hub(repo_id="Zilikon/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
josu/gpt-neo-br-instruction
|
josu
| 2023-03-19T21:54:05Z | 19 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-19T21:11:41Z |
---
language:
- pt
widget:
- text: Explique o que é inteligência artificial.
- text: Explique o que é processamento de linguagem natural.
---
```python
from transformers import GenerationConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("josu/gpt-neo-br-instruction")
tokenizer = AutoTokenizer.from_pretrained("josu/gpt-neo-br-instruction")

# evaluate() moves the inputs to the GPU, so the model has to live there as well.
model = model.cuda()

def generate_prompt(instruction, input=None):
    if input:
        return f"""Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.
### Instrução:
{instruction}
### Entrada:
{input}
### Resposta:"""
    else:
        return f"""Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que complete adequadamente o pedido.
### Instrução:
{instruction}
### Resposta:"""

generation_config = GenerationConfig(
    temperature=0.2,
    top_p=0.75,
    num_beams=4,
)

def evaluate(instruction, input=None):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )
    content = []
    for s in generation_output.sequences:
        output = tokenizer.decode(s)
        content.append(output.split("### Resposta:")[1].strip())
    return content
```
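A quick usage example with one of the widget prompts above (assumes a CUDA GPU is available, since `evaluate` moves the inputs to the GPU):
```python
print(evaluate("Explique o que é inteligência artificial."))
```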
|
c0ldstudy/Taxi-v3
|
c0ldstudy
| 2023-03-19T21:08:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T21:08:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="c0ldstudy/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
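To run the loaded policy, one option is a greedy rollout over the Q-table. This sketch assumes the pickled dictionary uses the same keys as the Deep RL course notebook (`"env_id"` and `"qtable"`), which may not hold for every upload, and uses the Gymnasium-style `reset`/`step` API:
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])      # e.g. "Taxi-v3"
qtable = np.array(model["qtable"])   # hypothetical key for the learned Q-table

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```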
|
mdoshi2612/fake-news-detector
|
mdoshi2612
| 2023-03-19T21:07:21Z | 0 | 0 | null |
[
"code",
"en",
"arxiv:1910.09700",
"region:us"
] | null | 2023-03-19T21:01:06Z |
---
language:
- en
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|