modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
---|---|---|---|---|---|---|---|---|---|
fidhal-ad/liar-deberta-v3-base_first_attempt
|
fidhal-ad
| 2023-07-03T10:37:21Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T15:43:51Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: liar-deberta-v3-base_first_attempt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# liar-deberta-v3-base_first_attempt
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6722
- Accuracy: 0.2705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
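For illustration only (not part of the original card), these values map onto `transformers.TrainingArguments` roughly as follows; the `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="liar-deberta-v3-base_first_attempt",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```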
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7307 | 1.0 | 642 | 1.6722 | 0.2705 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jordimas/bloom-ctranslate2
|
jordimas
| 2023-07-03T10:37:16Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-06-28T15:02:40Z |
---
license: bigscience-bloom-rail-1.0
---
# Bloom CTranslate2 models
This is a collection of some of the [BigScience BLOOM](https://huggingface.co/bigscience/bloom) models exported to the
[CTranslate2](https://github.com/OpenNMT/CTranslate2) model format, which allows these models to be loaded and run
efficiently on CPU or GPU.
## Models
The models have been converted to *float16* and can be loaded with any other quantization method (e.g. *int8*).
| Model name | Description |
| --- | --- |
| [bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 560M parameter model pretrained on ROOTS|
| [bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 3B parameter model pretrained on ROOTS |
| [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 7.1B parameter model finetuned on xP3|
| [bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 7.1B parameter model finetuned on xP3mt |
| [mt0-xxl-mt](https://huggingface.co/bigscience/mt0-xxl-mt) | 13B parameter model finetuned on xP3|
See [directories](https://huggingface.co/jordimas/bloom-ctranslate2/tree/main) for the different models available.
## Simple code to use them
Install dependencies:
```shell
pip install huggingface_hub ctranslate2 transformers torch
```
Usage:
```python
import huggingface_hub
import ctranslate2
import transformers
model_name = "bloomz-7b1"
prompt = "Hello, I am Joan and I am from Barcelona and"
repo_id = "jordimas/bloom-ctranslate2"
snapshot_folder = huggingface_hub.snapshot_download(repo_id = repo_id, allow_patterns=f"*{model_name}*")
print(f"folder: {snapshot_folder}")
model = f"{snapshot_folder}/{model_name}"
generator = ctranslate2.Generator(model, compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained(model)
start_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([start_tokens], max_length=90)
result = tokenizer.decode(results[0].sequences_ids[0])
print(f"Result: {result}")
```
|
T-Systems-onsite/cross-en-pl-roberta-sentence-transformer
|
T-Systems-onsite
| 2023-07-03T10:33:55Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"pl",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- pl
license: mit
tags:
- sentence_embedding
---
|
T-Systems-onsite/cross-en-de-fr-roberta-sentence-transformer
|
T-Systems-onsite
| 2023-07-03T10:33:40Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"de",
"fr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- de
- fr
license: mit
tags:
- sentence_embedding
---
|
CogwiseAI/testchatexample
|
CogwiseAI
| 2023-07-03T10:30:57Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-03T02:20:40Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
|
ecwk/distilbert-git-commits-bugfix-classification
|
ecwk
| 2023-07-03T10:09:49Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T10:08:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-git-commits-bugfix-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-git-commits-bugfix-classification
This model is a fine-tuned version of [neuralsentry/distilbert-git-commits-mlm](https://huggingface.co/neuralsentry/distilbert-git-commits-mlm) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5037
- Accuracy: 0.9231
- Precision: 0.85
- Recall: 1.0
- F1: 0.9189
- Roc Auc: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.6837 | 1.0 | 22 | 0.6040 | 0.5897 | 0.5161 | 0.9412 | 0.6667 | 0.6297 |
| 0.3852 | 2.0 | 44 | 0.2881 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.2148 | 3.0 | 66 | 0.3807 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0701 | 4.0 | 88 | 0.4934 | 0.8718 | 0.7727 | 1.0 | 0.8718 | 0.8864 |
| 0.0164 | 5.0 | 110 | 0.4892 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 |
| 0.0039 | 6.0 | 132 | 0.4929 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 |
| 0.0012 | 7.0 | 154 | 0.4065 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0008 | 8.0 | 176 | 0.4837 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0007 | 9.0 | 198 | 0.5000 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0006 | 10.0 | 220 | 0.5037 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Abhishek1609/my-pet-dog-sbs
|
Abhishek1609
| 2023-07-03T09:57:16Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T09:52:44Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-SBS Dreambooth model trained by Abhishek1609 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU243
Sample pictures of this concept:
|
daiwenbin/xlm-roberta-base-finetuned-panx-all
|
daiwenbin
| 2023-07-03T09:46:22Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-03T09:32:22Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2039
- F1: 0.8461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3133 | 1.0 | 715 | 0.2104 | 0.8000 |
| 0.1362 | 2.0 | 1430 | 0.2030 | 0.8329 |
| 0.0753 | 3.0 | 2145 | 0.2039 | 0.8461 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
Kirili4ik/mbart_ruDialogSum
|
Kirili4ik
| 2023-07-03T09:45:51Z | 338 | 25 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"ru",
"license:cc",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- ru
tags:
- mbart
inference:
parameters:
no_repeat_ngram_size: 4
num_beams: 5
datasets:
- IlyaGusev/gazeta
- samsum
- samsum_(translated_into_Russian)
widget:
- text: >
Джефф: Могу ли я обучить модель 🤗 Transformers на Amazon SageMaker?
Филипп: Конечно, вы можете использовать новый контейнер для глубокого
обучения HuggingFace.
Джефф: Хорошо.
Джефф: и как я могу начать?
Джефф: где я могу найти документацию?
Филипп: ок, ок, здесь можно найти все:
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
model-index:
- name: mbart_ruDialogSum
results:
- task:
name: Abstractive Dialogue Summarization
type: abstractive-text-summarization
dataset:
name: SAMSum Corpus (translated to Russian)
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 34.5
- name: Validation ROUGE-L
type: rouge-l
value: 33
- name: Test ROUGE-1
type: rouge-1
value: 31
- name: Test ROUGE-L
type: rouge-l
value: 28
license: cc
---
### 📝 Description
MBart for Russian summarization, fine-tuned for **dialogue** summarization.
This model was first fine-tuned by [Ilya Gusev](https://hf.co/IlyaGusev) on the [Gazeta dataset](https://huggingface.co/datasets/IlyaGusev/gazeta). We then **fine-tuned** that model on the [SAMSum dataset](https://huggingface.co/datasets/samsum) **translated to Russian** using the Google Translate API.
🤗 Moreover, we have implemented a **Telegram bot, [@summarization_bot](https://t.me/summarization_bot),** with the inference of this model. Add it to a chat and get summaries instead of dozens of spam messages! 🤗
### ❓ How to use with code
```python
from transformers import AutoTokenizer, MBartForConditionalGeneration
# Download model and tokenizer
model_name = "Kirili4ik/mbart_ruDialogSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)
model.eval()
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
top_k=0,
num_beams=3,
no_repeat_ngram_size=3
)[0]
summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```
|
aronmal/Taxi-v3-Qtable
|
aronmal
| 2023-07-03T09:39:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T09:39:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-Qtable
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="aronmal/Taxi-v3-Qtable", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
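The snippet above follows the Hugging Face Deep RL course template, where `load_from_hub` and `gym` are provided by the course notebook. As an illustration only (not from the original card), a greedy rollout with the loaded Q-table could look like the sketch below; it assumes the pickled dict follows the course format with a `"qtable"` entry and a gymnasium-style environment API.
```python
import numpy as np

# Greedy evaluation rollout (sketch; assumes model["qtable"] exists and the env
# follows the gymnasium API: reset() -> (state, info), step() -> 5-tuple).
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the best known action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```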
|
aronmal/q-FrozenLake-v1-4x4-noSlippery
|
aronmal
| 2023-07-03T09:37:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T09:37:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="aronmal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SRDdev/ScriptForge_Plus
|
SRDdev
| 2023-07-03T09:36:48Z | 132 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T05:25:32Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
widget:
- text: 10 Meditation Tips
example_title: Health Example
- text: Cooking red sauce pasta
example_title: Cooking Example
- text: Introduction to Keras
example_title: Technology Example
tags:
- text-generation
metrics:
- accuracy
---
# ScriptForge_Plus
## 🖊️ Model description
ScriptForge_Plus is a language model trained on a dataset of 5000 YouTube videos covering different domains of AI.
ScriptForge_Plus is a causal language transformer that resembles the GPT-2 architecture: it predicts the probability of a sequence of words based on the preceding words, generating a probability distribution over the next word without incorporating future words.
The goal of ScriptForge_Plus is to generate scripts for YouTube videos that are coherent, informative, and engaging.
This can be useful for content creators who are looking for inspiration or who want to automate the process of generating video scripts.
To use ScriptForge_Plus, provide a prompt or a starting sentence, and the model will generate a sequence of words that follows the context and style of the training data.
Models
- [ScriptForge_Plus](https://huggingface.co/SRDdev/ScriptForge_Plus) : AI content Model
- [ScriptForge-small](https://huggingface.co/SRDdev/ScriptForge-medium) : Generalized Content Model
More models are coming soon...
## 🛒 Intended uses
The intended uses of ScriptForge_Plus include generating scripts for videos, providing inspiration for content creators, and automating the process of generating video scripts.
## 📝 How to use
You can use this model directly with a pipeline for text generation.
1. __Load Model__
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("SRDdev/ScriptForge_Plus")
model = AutoModelForCausalLM.from_pretrained("SRDdev/ScriptForge_Plus")
```
2. __Pipeline__
```python
from transformers import pipeline
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
context = "What is Deep Learning"
length_to_generate = 250
script = generator(context, max_length=length_to_generate, do_sample=True)[0]['generated_text']
script
```
<p style="opacity: 0.8">The model may generate random or inaccurate information, as it is still in beta.</p>
## 🎈Limitations and bias
> The model is trained on YouTube scripts and will work best for that domain. It may also generate incorrect or random information, so users should be aware of this and cross-validate the results.
## Citations
```
@model{
Name=Shreyas Dixit
framework=Pytorch
Year=Jan 2023
Pipeline=text-generation
Github=https://github.com/SRDdev
LinkedIn=https://www.linkedin.com/in/srddev
}
```
|
OriginF/output
|
OriginF
| 2023-07-03T09:34:08Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-20T08:28:55Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks lego
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - OriginF/output
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks lego" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
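Not part of the original card, but since the repo is tagged with `diffusers:StableDiffusionPipeline`, loading it presumably follows the standard diffusers pattern; a minimal sketch (the fp16/CUDA settings and the extra prompt text are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("OriginF/output", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The instance prompt from the card is "a photo of sks lego".
image = pipe("a photo of sks lego on a wooden table", num_inference_steps=30).images[0]
image.save("sks_lego.png")
```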
|
daiwenbin/xlm-roberta-base-finetuned-panx-de-fr
|
daiwenbin
| 2023-07-03T09:28:37Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-03T09:18:25Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2083
- F1: 0.8465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.36 | 1.0 | 715 | 0.2279 | 0.8163 |
| 0.1862 | 2.0 | 1430 | 0.1997 | 0.8363 |
| 0.1169 | 3.0 | 2145 | 0.2083 | 0.8465 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
joserodr68/Qtable_taxi_ja
|
joserodr68
| 2023-07-03T09:15:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T09:15:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Qtable_taxi_ja
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.63
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="joserodr68/Qtable_taxi_ja", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Shularp/Helsinki_mul-en_test
|
Shularp
| 2023-07-03T09:11:46Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-03T07:42:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: TestHelsinkiJpEn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestHelsinkiJpEn
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mul-en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7084 | 1.0 | 2423 | 1.0513 |
| 0.8524 | 2.0 | 4846 | 1.0528 |
| 0.7534 | 3.0 | 7269 | 1.0740 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NancyAthghara23/red-panda-rpd
|
NancyAthghara23
| 2023-07-03T08:55:34Z | 10 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T08:52:05Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Red-Panda-rpd Dreambooth model trained by NancyAthghara23 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU151
Sample pictures of this concept:


|
ak2704/ppo-LunarLander-v2
|
ak2704
| 2023-07-03T08:47:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T08:43:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.42 +/- 19.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
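Until the TODO above is filled in, here is a hedged sketch of how such SB3 checkpoints are commonly loaded and evaluated; the filename `ppo-LunarLander-v2.zip` and the gymnasium-based setup are assumptions, not taken from this repo:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption -- check the repository's file listing.
checkpoint = load_from_hub(repo_id="ak2704/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```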
|
y59/puffy-copyeasy
|
y59
| 2023-07-03T08:35:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T08:29:37Z |
---
license: creativeml-openrail-m
---
|
Soojeong/female_hanbok_1e-7_ckpt_icb
|
Soojeong
| 2023-07-03T08:32:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T06:33:25Z |
---
license: creativeml-openrail-m
base_model: model/chilloutmix_NiPrunedFp16Fix
instance_prompt: a photo of wearing hanbok
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Soojeong/female_hanbok_1e-7_ckpt_icb
This is a DreamBooth model derived from model/chilloutmix_NiPrunedFp16Fix. The weights were trained on "a photo of wearing hanbok" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
|
vlkn/bloom3b_instruct
|
vlkn
| 2023-07-03T08:24:15Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-07-03T08:12:44Z |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloom3b_instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom3b_instruct
This model is a fine-tuned version of [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hoanghoavienvo/roberta-base-Dep
|
hoanghoavienvo
| 2023-07-03T08:13:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T06:42:10Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-Dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-Dep
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4653
- Accuracy: 0.8333
- F1: 0.8992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 235 | 0.4592 | 0.8217 | 0.8911 |
| No log | 2.0 | 470 | 0.4116 | 0.845 | 0.9086 |
| 0.2907 | 3.0 | 705 | 0.4892 | 0.8133 | 0.8845 |
| 0.2907 | 4.0 | 940 | 0.4532 | 0.835 | 0.9011 |
| 0.2694 | 5.0 | 1175 | 0.4653 | 0.8333 | 0.8992 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jantz/IU-RVC_V2-300_Epochs
|
jantz
| 2023-07-03T08:12:30Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-07-03T02:07:14Z |
- Dataset: 1 hour of IU songs.
- Vocal Separation: UVR5 model was used to separate vocals. The process involved: Kim Vocal 1 -> Reverb HQ -> Karaoke 2.
- Additional Processing: Noise gate and manual touch-ups were performed in Audacity.
|
boonnam/ppo-LunarLander-v2
|
boonnam
| 2023-07-03T08:12:02Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-02T09:22:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.56 +/- 17.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Senaa/My-Mix
|
Senaa
| 2023-07-03T07:54:32Z | 0 | 4 | null |
[
"region:us"
] | null | 2023-06-08T23:26:28Z |
**For best results, please use these prompts**
+ prompts: (8k, best quality, masterpiece, ultra highres:1.2),
+ negative prompts: EasyNegative, bad-hands-5, (worst quality, low quality:1.8), low contrast, monochrome, text, watermark, lowres, blurry, blurred, freckles, (necklace, earrings:1.2), (ribs, abs, muscular:1.2)
**Note:**
You can also lower the (worst quality, low quality:1.8) emphasis to (1.6 or 1.4) if you find the skin looks dry and/or the prompt is too strong.
**Sample images**
Using (worst quality, low quality:2.0)

Using (worst quality, low quality:1.6)

Various (worst quality, low quality)





|
manmyung/q-FrozenLake-v1-4x4-noSlippery
|
manmyung
| 2023-07-03T07:53:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T07:53:54Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="manmyung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
anirbankgec/my_awesome_qa_model
|
anirbankgec
| 2023-07-03T07:53:29Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-30T05:20:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.1636 |
| 2.6442 | 2.0 | 500 | 1.6647 |
| 2.6442 | 3.0 | 750 | 1.5982 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
daiwenbin/xlm-roberta-base-finetuned-panx-de
|
daiwenbin
| 2023-07-03T07:50:20Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-03T07:35:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8327865206027916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1947
- F1: 0.8328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3446 | 1.0 | 525 | 0.2154 | 0.8031 |
| 0.1782 | 2.0 | 1050 | 0.2004 | 0.8228 |
| 0.1128 | 3.0 | 1575 | 0.1947 | 0.8328 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
somendas17/my-pet-cat-meow
|
somendas17
| 2023-07-03T07:48:42Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T07:45:17Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-meow Dreambooth model trained by somendas17 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU541
Sample pictures of this concept:

|
kutnus/cem-karaca
|
kutnus
| 2023-07-03T07:47:30Z | 0 | 0 | null |
[
"tr",
"doi:10.57967/hf/0840",
"license:openrail",
"region:us"
] | null | 2023-07-03T07:45:16Z |
---
license: openrail
language:
- tr
---
|
heka-ai/e5-90k
|
heka-ai
| 2023-07-03T07:31:44Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-03T07:31:39Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# heka-ai/e5-90k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('heka-ai/e5-90k')
embeddings = model.encode(sentences)
print(embeddings)
```
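Since the pipeline tag is `sentence-similarity`, a natural follow-up (not in the original card) is scoring sentence pairs with cosine similarity, e.g.:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('heka-ai/e5-90k')
embeddings = model.encode(["A man is eating food.", "Someone is eating a meal."], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
```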
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/e5-90k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10000 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
vladkolev/distilroberta-base-finetuned-emotion
|
vladkolev
| 2023-07-03T07:27:32Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-21T08:29:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilroberta-base-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-emotion
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3438
- Accuracy: 0.9004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.615 | 1.0 | 748 | 0.2832 | 0.9004 |
| 0.2716 | 2.0 | 1496 | 0.2632 | 0.9044 |
| 0.1929 | 3.0 | 2244 | 0.3124 | 0.9071 |
| 0.1559 | 4.0 | 2992 | 0.3258 | 0.8971 |
| 0.1185 | 5.0 | 3740 | 0.3438 | 0.9004 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
Bugsys0302/merucbslor
|
Bugsys0302
| 2023-07-03T07:24:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T07:16:11Z |
---
license: creativeml-openrail-m
---
|
vlkn/bloom1b_instruct
|
vlkn
| 2023-07-03T07:18:55Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-07-03T07:15:45Z |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloom1b_instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom1b_instruct
This model is a fine-tuned version of [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jumtra/calm-7b-tune-ep4
|
Jumtra
| 2023-07-03T07:09:11Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ja",
"lm",
"nlp",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/hh-rlhf-49k-ja",
"dataset:kunishou/cnn-dailymail-27k-ja",
"dataset:Jumtra/oasst1_ja",
"dataset:Jumtra/jglue_jnli",
"dataset:Jumtra/jglue_jsquad",
"dataset:Jumtra/jglue_jsquads_with_input",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-25T09:01:35Z |
---
license: cc-by-sa-4.0
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
datasets:
- kunishou/databricks-dolly-15k-ja
- kunishou/hh-rlhf-49k-ja
- kunishou/cnn-dailymail-27k-ja
- Jumtra/oasst1_ja
- Jumtra/jglue_jnli
- Jumtra/jglue_jsquad
- Jumtra/jglue_jsquads_with_input
inference: false
language:
- ja
---
# open-calm-7b
This model was created by fine-tuning [cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b) using MosaicML's llm-foundry repository.
## Model Date
June 28, 2023
## Model License
cc-by-sa-4.0
## Evaluation
Model accuracy was evaluated on [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA).
Perplexity on the validation data used during training is also listed.
| model name | Accuracy | Perplexity |
| ---- | ---- | ---- |
| [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 |
| [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 |
| [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 |
| [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 |
| [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 |
| [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 |
The following prompt was used:
```python
INSTRUCTION_KEY = "### 入力:"
RESPONSE_KEY = "### 回答:"
INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。"
JP_PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
```
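As an illustration only (not from the original card), the template above can be combined with a standard `transformers` generation call roughly as follows; the instruction string and generation settings are placeholders:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Jumtra/calm-7b-tune-ep4")
model = AutoModelForCausalLM.from_pretrained("Jumtra/calm-7b-tune-ep4")

# Fill the remaining {instruction} placeholder of the template defined above.
prompt = JP_PROMPT_FOR_GENERATION_FORMAT.format(instruction="日本で一番高い山は何ですか?")

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```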
|
NasimB/gpt2-cl-rarity-sampling-5
|
NasimB
| 2023-07-03T07:01:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-03T04:30:07Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cl-rarity-sampling-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cl-rarity-sampling-5
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6015 | 0.05 | 500 | 5.8621 |
| 5.3617 | 0.11 | 1000 | 5.4637 |
| 5.0237 | 0.16 | 1500 | 5.2314 |
| 4.8011 | 0.22 | 2000 | 5.0828 |
| 4.6311 | 0.27 | 2500 | 4.9993 |
| 4.504 | 0.33 | 3000 | 4.9326 |
| 4.3948 | 0.38 | 3500 | 4.8809 |
| 4.2939 | 0.44 | 4000 | 4.8421 |
| 4.2022 | 0.49 | 4500 | 4.8057 |
| 4.1111 | 0.55 | 5000 | 4.7772 |
| 4.0184 | 0.6 | 5500 | 4.7492 |
| 3.9458 | 0.66 | 6000 | 4.7347 |
| 3.8712 | 0.71 | 6500 | 4.7195 |
| 3.8079 | 0.77 | 7000 | 4.7051 |
| 3.7575 | 0.82 | 7500 | 4.6946 |
| 3.716 | 0.88 | 8000 | 4.6904 |
| 3.6978 | 0.93 | 8500 | 4.6861 |
| 3.6899 | 0.99 | 9000 | 4.6848 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
SnorreStorjord/whisper-large-no
|
SnorreStorjord
| 2023-07-03T06:55:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"no",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-27T11:44:47Z |
---
language:
- no
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- NbAiLab/NPSC
model-index:
- name: Whisper Large NO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large NO (WIP)
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the NbAiLab/NPSC dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nolanaatama/phngyfrmfvnghtstfrddysrvcv2300pchnlgspdrwb
|
nolanaatama
| 2023-07-03T06:51:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T06:37:26Z |
---
license: creativeml-openrail-m
---
|
Manab/donut-base-my_model_1_new_1
|
Manab
| 2023-07-03T06:27:04Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-03T05:57:42Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-my_model_1_new_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-my_model_1_new_1
This model is a fine-tuned version of [Manab/donut-base-my_model_1_new_1](https://huggingface.co/Manab/donut-base-my_model_1_new_1) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.8017
- eval_runtime: 42.3382
- eval_samples_per_second: 0.685
- eval_steps_per_second: 0.094
- epoch: 0.96
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RajkNakka/q-FrozenLake-v1-4x4-noSlippery
|
RajkNakka
| 2023-07-03T06:22:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T06:20:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="RajkNakka/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ezell/testModel
|
Ezell
| 2023-07-03T06:00:06Z | 0 | 0 |
bertopic
|
[
"bertopic",
"music",
"table-question-answering",
"af",
"dataset:GAIR/lima",
"license:bigcode-openrail-m",
"region:us"
] |
table-question-answering
| 2023-07-03T05:59:27Z |
---
license: bigcode-openrail-m
datasets:
- GAIR/lima
language:
- af
metrics:
- bleurt
library_name: bertopic
pipeline_tag: table-question-answering
tags:
- music
---
|
vinthony/SadTalker-V002rc
|
vinthony
| 2023-07-03T05:52:19Z | 0 | 17 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-03T05:34:03Z |
---
license: mit
---
The newly released model checkpoints for https://github.com/OpenTalker/SadTalker.
The file https://huggingface.co/vinthony/SadTalker-V002rc/blob/main/epoch_00190_iteration_000400000_checkpoint.pt comes from https://github.com/RenYurui/PIRender.
Thanks for their wonderful work!
|
Htar/ppo-LunarLander-v2
|
Htar
| 2023-07-03T05:38:39Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T05:36:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.37 +/- 12.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
google/umt5-base
|
google
| 2023-07-03T05:37:52Z | 1,831 | 13 |
transformers
|
[
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-02T01:49:59Z |
---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.*
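As a practical starting point (not part of the original card), the checkpoint can be loaded with `transformers` for fine-tuning; a minimal sketch, assuming a transformers version that includes UMT5 support and using placeholder example text:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/umt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The checkpoint is pretraining-only, so compute a loss on your own
# (input, target) pairs and fine-tune before using it on a downstream task.
batch = tokenizer(
    "summarize: UniMax caps the number of repeats over each language's corpus.",
    text_target="UniMax caps per-language repeats.",
    return_tensors="pt",
)
loss = model(**batch).loss
print(float(loss))
```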
|
google/umt5-xxl
|
google
| 2023-07-03T05:37:17Z | 286 | 19 |
transformers
|
[
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-02T02:15:00Z |
---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
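A minimal loading sketch for fine-tuning could look like the following (this assumes a `transformers` release that includes UMT5 support; the xxl checkpoint also needs a large amount of memory):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumes a transformers version that registers the UMT5 architecture.
model_id = "google/umt5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# Use this as the starting point for fine-tuning; the raw checkpoint is not meant for zero-shot inference.
```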
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.*
|
sid/a2c-AntBulletEnv-v0
|
sid
| 2023-07-03T05:32:22Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-02T14:57:39Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1394.91 +/- 118.72
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
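A hedged sketch of what that code could look like (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename below is an assumption; adjust it to the actual .zip stored in this repo.
checkpoint = load_from_hub(repo_id="sid/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
# Evaluating the agent additionally requires pybullet_envs to register AntBulletEnv-v0.
```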
|
emya/outputs
|
emya
| 2023-07-03T05:29:25Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-02T22:00:13Z |
---
license: creativeml-openrail-m
base_model: outputs
instance_prompt: a logo of a service, named Mcdonald
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - emya/outputs
This is a dreambooth model derived from outputs. The weights were trained on a logo of a service, named Mcdonald using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
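A minimal generation sketch with `diffusers` (assuming the weights load as a standard `StableDiffusionPipeline`, as the tags suggest):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("emya/outputs", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The instance prompt used during DreamBooth training.
prompt = "a logo of a service, named Mcdonald"
image = pipe(prompt).images[0]
image.save("dreambooth_sample.png")
```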
|
Tiru8055/ppo-SnowballTarget
|
Tiru8055
| 2023-07-03T05:28:11Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-03T05:12:32Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Tiru8055/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Valent2809/classifier-model
|
Valent2809
| 2023-07-03T05:15:49Z | 26 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T03:27:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Valent2809/classifier-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Valent2809/classifier-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3103
- Validation Loss: 0.4343
- Train Accuracy: 0.8478
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 125, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6413 | 0.5342 | 0.7826 | 0 |
| 0.4819 | 0.4865 | 0.8043 | 1 |
| 0.3806 | 0.4798 | 0.7826 | 2 |
| 0.3400 | 0.4362 | 0.8261 | 3 |
| 0.3009 | 0.4343 | 0.8478 | 4 |
| 0.3103 | 0.4343 | 0.8478 | 5 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
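Since no usage snippet is provided, here is a hedged sketch of loading the checkpoint for inference (label names are not documented, so only the predicted class index is printed):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Valent2809/classifier-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example text to classify", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index
```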
|
xzuyn/GPT-2-XL-1.5B-GGML
|
xzuyn
| 2023-07-03T05:00:04Z | 0 | 1 | null |
[
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-05-23T04:05:46Z |
---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/gpt2-xl
|
gautam1989/mt5-small-finetuned-amazon-en-es
|
gautam1989
| 2023-07-03T04:54:53Z | 7 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-03T04:00:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gautam1989/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gautam1989/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mT5-small](https://huggingface.co/google/mT5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2895
- Validation Loss: 3.3954
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.6709 | 4.4471 | 0 |
| 5.9597 | 3.7763 | 1 |
| 5.1538 | 3.6068 | 2 |
| 4.7554 | 3.5175 | 3 |
| 4.4603 | 3.4380 | 4 |
| 4.2895 | 3.3954 | 5 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
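A hedged generation sketch (the task is not documented; the model name suggests the Hugging Face course's Amazon-reviews summarization exercise, so the example below treats it as a summarizer):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "gautam1989/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "I loved this book, although the ending felt a little rushed."
inputs = tokenizer(text, return_tensors="tf")
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```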
|
chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
|
chriskim2273
| 2023-07-03T04:50:05Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-03T04:13:01Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 45 | 0.5443 |
| No log | 2.0 | 90 | 0.6332 |
| No log | 3.0 | 135 | 0.6942 |
| No log | 4.0 | 180 | 0.6725 |
| No log | 5.0 | 225 | 0.7219 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
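A hedged inference sketch using the `question-answering` pipeline (the question and context below are illustrative only; the training data is not documented):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta",
)
result = qa(
    question="Which company is mentioned?",
    context="Acme Robotics announced a new funding round for its IoT platform.",
)
print(result["answer"], result["score"])
```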
|
Aeala/Enterredaas-65b-QLoRA
|
Aeala
| 2023-07-03T04:34:35Z | 0 | 4 | null |
[
"region:us"
] | null | 2023-07-03T04:07:43Z |
## LoRA Info:
Please note that this is a highly experimental LoRA model. It may do some good stuff, it might do some undesirable stuff. Training is paused for now. Feel free to try it!~
**Important Note**: This was trained in the *Alpaca* format, so prompting should be something like:
```
### Instruction:
<system prompt> (without the <>, this works like telling the AI what it is/purpose. i.e. like ChatGPT API's system prompt)
### Input:
<prompt> (without the <>)
### Response:
```
Current upload: *possibly* final checkpoint
## Benchmarks
**wikitext2:** Coming soon...
**ptb-new:** Coming soon...
**c4-new:** Coming soon...
|
hopkins/mbart-finetuned-eng-kor-49
|
hopkins
| 2023-07-03T04:25:49Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T04:12:15Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9908
- Bleu: 7.2223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
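A hedged translation sketch (this assumes the tokenizer keeps the mBART-50 language codes of the base model, with `en_XX` as source and `ko_KR` as target):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "hopkins/mbart-finetuned-eng-kor-49"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # assumed unchanged from the mBART-50 base model
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```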
|
hopkins/mbart-finetuned-eng-deu-50
|
hopkins
| 2023-07-03T04:24:57Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T04:06:46Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-50
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6559
- Bleu: 21.0004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aaa950739/trained_model
|
aaa950739
| 2023-07-03T04:16:04Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-03T03:56:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: trained_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
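A hedged inference sketch with the `object-detection` pipeline (the label set depends on the undocumented `imagefolder` dataset, so the printed labels are whatever the fine-tune defined):
```python
from PIL import Image
from transformers import pipeline

detector = pipeline("object-detection", model="aaa950739/trained_model")
image = Image.open("example.jpg")  # any local test image

for detection in detector(image):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```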
|
deepghs/imgutils-models
|
deepghs
| 2023-07-03T04:12:18Z | 0 | 6 | null |
[
"onnx",
"dataset:deepghs/chafen_arknights",
"dataset:deepghs/monochrome_danbooru",
"license:mit",
"region:us"
] | null | 2023-03-11T08:37:38Z |
---
license: mit
datasets:
- deepghs/chafen_arknights
- deepghs/monochrome_danbooru
metrics:
- accuracy
---
# imgutils-models
This repository includes all the models in [deepghs/imgutils](https://github.com/deepghs/imgutils).
## LPIPS
This model is used for clustering anime images (named `差分` in Chinese), based on [richzhang/PerceptualSimilarity](https://github.com/richzhang/PerceptualSimilarity), trained with dataset [deepghs/chafen_arknights(private)](https://huggingface.co/datasets/deepghs/chafen_arknights).
When the threshold is `0.45`, the [adjusted rand score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) can reach `0.995`.
Files:
* `lpips_diff.onnx`, which computes feature differences.
* `lpips_feature.onnx`, which extracts features.
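A hedged loading sketch for these ONNX files (the in-repo path below is an assumption; check the repository's file browser for the exact location, and inspect the session inputs before running inference):
```python
import onnxruntime
from huggingface_hub import hf_hub_download

# The path "lpips/lpips_feature.onnx" is an assumption; adjust it to the actual file location in this repo.
feature_path = hf_hub_download("deepghs/imgutils-models", "lpips/lpips_feature.onnx")
session = onnxruntime.InferenceSession(feature_path)
print([inp.name for inp in session.get_inputs()])  # expected input tensors
```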
## Monochrome
These models are used for monochrome image classification, based on CNNs and Transformers, and trained with the dataset [deepghs/monochrome_danbooru (private)](https://huggingface.co/datasets/deepghs/monochrome_danbooru).
The following are the checkpoints that have been formally put into use, all based on the Caformer architecture:
| Checkpoint | Algorithm | Safe Level | Accuracy | False Negative | False Positive |
|:----------------------------:|:---------:|:----------:|:----------:|:--------------:|:--------------:|
| monochrome-caformer-40 | caformer | 0 | 96.41% | 2.69% | 0.89% |
| **monochrome-caformer-110** | caformer | 0 | **96.97%** | 1.57% | 1.46% |
| monochrome-caformer_safe2-80 | caformer | 2 | 94.84% | **1.12%** | 4.03% |
| monochrome-caformer_safe4-70 | caformer | 4 | 94.28% | **0.67%** | 5.04% |
**`monochrome-caformer-110` has the best overall accuracy** among them. However, because this model is often used to screen out monochrome images
and we want to catch as many of them as possible without omission, we also provide weighted models (`safe2` and `safe4`).
Their overall accuracy is slightly lower, but their False Negative rate (misidentifying a monochrome image as a colored one) is also lower,
making them more suitable for batch screening.
## Deepdanbooru
`deepdanbooru` is a model used to tag anime images. Here, we provide a table for tag classification called `deepdanbooru_tags.csv`,
as well as an ONNX model (from [chinoll/deepdanbooru](https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags)).
It's worth noting that due to the poor quality of the deepdanbooru model itself and the relatively old dataset,
it is only for testing purposes and is not recommended to be used as the main classification model. We recommend using the `wd14` model instead, see:
* https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags
|
hopkins/mbart-finetuned-eng-ind-49
|
hopkins
| 2023-07-03T04:11:46Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T03:53:54Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7653
- Bleu: 22.0600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
anurag629/ppo-LunarLander-v2
|
anurag629
| 2023-07-03T04:08:06Z | 1 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T04:07:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.10 +/- 12.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
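A hedged sketch of what that code could look like (the checkpoint filename is an assumption; check the repository's file list, and note that `LunarLander-v2` requires the Box2D extra):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename below is an assumption; adjust it to the actual .zip stored in this repo.
checkpoint = load_from_hub(repo_id="anurag629/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```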
|
hopkins/mbart-finetuned-eng-ind-47
|
hopkins
| 2023-07-03T03:59:13Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T03:41:18Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7657
- Bleu: 21.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-45
|
hopkins
| 2023-07-03T03:34:35Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T03:16:54Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-45
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9907
- Bleu: 7.0592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-46
|
hopkins
| 2023-07-03T03:33:45Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T03:15:41Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-46
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6533
- Bleu: 20.8950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-43
|
hopkins
| 2023-07-03T03:22:08Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T03:08:49Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-43
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9892
- Bleu: 6.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-42
|
hopkins
| 2023-07-03T03:15:12Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T02:57:33Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9879
- Bleu: 6.7656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-44
|
hopkins
| 2023-07-03T03:14:24Z | 67 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T02:56:32Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7625
- Bleu: 21.9586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rodrigoclira/ppo-CartPole-v1
|
rodrigoclira
| 2023-07-03T03:13:55Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T03:13:47Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 145.30 +/- 62.61
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': 512
'repo_id': 'rodrigoclira/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
hopkins/mbart-finetuned-eng-ind-43
|
hopkins
| 2023-07-03T03:08:20Z | 70 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T02:50:25Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-43
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7586
- Bleu: 22.1541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Sourabh2/Cartpole-v2
|
Sourabh2
| 2023-07-03T03:03:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T03:02:25Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
|
AshtakaOOf/ssambatea-locon
|
AshtakaOOf
| 2023-07-03T02:58:58Z | 0 | 1 | null |
[
"Text-to-Image",
"anime",
"lora",
"locon",
"lycoris",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-07-03T01:36:57Z |
---
license: cc-by-nc-sa-4.0
tags:
- Text-to-Image
- anime
- lora
- locon
- lycoris
---
# SSAMBAtea Style LoCon

## token: **ssambatea**
Trained on SSAMBAtea artwork
This is a LoCon and requires the LyCORIS extension to work
I am planning on making a new improved dataset to do a V2
# License
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
|
hopkins/mbart-finetuned-eng-deu-45
|
hopkins
| 2023-07-03T02:57:51Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T02:39:34Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-45
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6514
- Bleu: 20.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alibaba-pai/pai-diffusion-artist-large-zh
|
alibaba-pai
| 2023-07-03T02:56:37Z | 14 | 7 |
diffusers
|
[
"diffusers",
"pytorch",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-03T09:38:48Z |
---
license: apache-2.0
tags:
- pytorch
- diffusers
- text-to-image
---
# Chinese Diffusion Model (Artist, 512 Resolution)
## 简介 Brief Introduction
我们开源了一个中文 Diffusion 模型,您可以直接输入中文提示词,我们为您呈现精美的艺术风格图片。本模型的默认分辨率是 512*512。
We release a Chinese diffusion model, which is able to generate high-quality artistic images according to the prompts you input. The default resolution of this model is 512*512.
* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
## 使用 Usage
本模型支持 `diffusers`,可以参考以下范例代码:
This model supports `diffusers`. Please refer to the following code:
```python
from diffusers import StableDiffusionPipeline
model_id = "alibaba-pai/pai-diffusion-artist-large-zh"
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")
prompt = "雾蒙蒙的日出在湖面上"
image = pipe(prompt).images[0]
image.save("result.png")
```
## 作品展示 Gallery
| prompt: 浮岛,天空,白云,城堡,幻想世界 | prompt: 红白玫瑰花,很多花瓣,绽放 |
| ---------------------------------------- | ---------------------------------- |
| negative_prompt: 油画,模糊,雾蒙蒙 | negative_prompt: |
|  |  |
| prompt: 亭台楼阁,曲径通幽,水墨绘画,中国风 | prompt: 阳光,彩虹,小白马 |
| -------------------------------------------- | -------------------------- |
| negative_prompt: 油画,彩色 | negative_prompt: |
|  |  |
## 使用须知 Notice for Use
使用上述模型需遵守[AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。
If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by the terms.
|
alibaba-pai/pai-diffusion-artist-xlarge-zh
|
alibaba-pai
| 2023-07-03T02:55:08Z | 14 | 5 |
diffusers
|
[
"diffusers",
"pytorch",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-19T06:07:13Z |
---
license: apache-2.0
tags:
- pytorch
- diffusers
- text-to-image
---
# Chinese Diffusion Model (Artist, 768 Resolution)
## 简介 Brief Introduction
我们开源了一个中文 Diffusion 模型,您可以直接输入中文提示词,我们为您呈现精美的艺术风格图片。本模型的默认分辨率是 768*768。
We release a Chinese diffusion model, which is able to generate high-quality artistic images according to the prompts you input. The default resolution of this model is 768*768.
* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
## 使用 Usage
本模型支持 `diffusers`,可以参考以下范例代码:
This model supports `diffusers`. Please refer to the following code:
```python
from diffusers import StableDiffusionPipeline
model_id = "alibaba-pai/pai-diffusion-artist-xlarge-zh"
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")
prompt = "雾蒙蒙的日出在湖面上"
image = pipe(prompt).images[0]
image.save("result.png")
```
## 作品展示 Gallery
| prompt: 火车,雪地,隧道,昏暗灯光,恐怖片 | prompt: 海底世界,各种鱼类,海草 |
| ------------------------------------------ | -------------------------------- |
| negative_prompt: 多个铁轨 | negative_prompt: |
|  |  |
| prompt: 粉红色羽毛的猫头鹰 | prompt: 带墨镜的猫,黑客,特工,西装 |
| -------------------------- | ------------------------------------ |
| negative_prompt: | negative_prompt: |
|  |  |
## 使用须知 Notice for Use
使用上述模型需遵守[AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。
If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by the terms.
|
digiplay/Landscape_PhotoReal_v1
|
digiplay
| 2023-07-03T02:53:33Z | 620 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T02:20:00Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/71987/landscapephotoreal?modelVersionId=76750
Sample images and prompt:
magnificent scenery, wide landscape, sharp and crisp background, very beautiful landscape, old ruins buildings, fantasy, birdview, best quality, masterpiece, ultra high res, dark blue light, cloudy, photo, photorealistic, wide view, kkw-ph1


photorealistic modern living room, sharp and crisp background, sofa, low table, bookshelf, parks and buildings from window, wood and flower, beautiful landscape, best quality, masterpiece, hires, in the morning light, detailed lighting, blue sky, (((photo))), (((photorealistic))) ,kkw-ph1, wide shot, web meeting background

|
hopkins/mbart-finetuned-eng-deu-42
|
hopkins
| 2023-07-03T02:38:45Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T02:24:45Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6513
- Bleu: 20.8783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-40
|
hopkins
| 2023-07-03T02:37:25Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T02:19:49Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9919
- Bleu: 7.0359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Rasith/NZappFineTune2
|
Rasith
| 2023-07-03T02:31:27Z | 31 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T02:31:01Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: NZappFineTune2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NZappFineTune2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-40
|
hopkins
| 2023-07-03T02:19:21Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T02:01:28Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7628
- Bleu: 21.8914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AhmedTaha012/gptneo-TxtToJson-v0.1.16
|
AhmedTaha012
| 2023-07-03T02:16:00Z | 79 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-03T01:43:59Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gptneo-TxtToJson-v0.1.16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptneo-TxtToJson-v0.1.16
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1180
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 88 | 0.6397 |
| No log | 2.0 | 176 | 0.5158 |
| No log | 3.0 | 264 | 0.4083 |
| No log | 4.0 | 352 | 0.2929 |
| No log | 5.0 | 440 | 0.2384 |
| 0.3687 | 6.0 | 528 | 0.1904 |
| 0.3687 | 7.0 | 616 | 0.1638 |
| 0.3687 | 8.0 | 704 | 0.1485 |
| 0.3687 | 9.0 | 792 | 0.1405 |
| 0.3687 | 10.0 | 880 | 0.1277 |
| 0.3687 | 11.0 | 968 | 0.1232 |
| 0.0629 | 12.0 | 1056 | 0.1291 |
| 0.0629 | 13.0 | 1144 | 0.1159 |
| 0.0629 | 14.0 | 1232 | 0.1123 |
| 0.0629 | 15.0 | 1320 | 0.1160 |
| 0.0629 | 16.0 | 1408 | 0.1159 |
| 0.0629 | 17.0 | 1496 | 0.1195 |
| 0.0137 | 18.0 | 1584 | 0.1186 |
| 0.0137 | 19.0 | 1672 | 0.1179 |
| 0.0137 | 20.0 | 1760 | 0.1180 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
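A hedged generation sketch (the prompt format used during fine-tuning is not documented, so the input below is illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="AhmedTaha012/gptneo-TxtToJson-v0.1.16")
output = generator(
    "Extract the order details: two laptops shipped to Cairo on Friday.",
    max_new_tokens=64,
)
print(output[0]["generated_text"])
```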
|
stephansf/taxi
|
stephansf
| 2023-07-03T02:15:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T02:15:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="stephansf/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hopkins/mbart-finetuned-eng-ind-39
|
hopkins
| 2023-07-03T02:13:00Z | 63 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T01:55:07Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-39
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7633
- Bleu: 21.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-38
|
hopkins
| 2023-07-03T02:06:04Z | 65 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T01:52:19Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-38
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7718
- Bleu: 21.7535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ngadou/falcon7b-scam-detector
|
Ngadou
| 2023-07-03T02:03:13Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text-generation",
"en",
"fr",
"dataset:timdettmers/openassistant-guanaco",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-03T01:49:44Z |
---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
language:
- en
- fr
metrics:
- accuracy
pipeline_tag: text-generation
library_name: adapter-transformers
---
|
hopkins/mbart-finetuned-eng-deu-40
|
hopkins
| 2023-07-03T02:00:58Z | 70 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T01:42:43Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6497
- Bleu: 20.8437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Avivi210/bramodel
|
Avivi210
| 2023-07-03T02:00:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T01:34:54Z |
---
license: creativeml-openrail-m
---
|
ankitvyas/myBloomLoraModel
|
ankitvyas
| 2023-07-03T01:31:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-03T01:19:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
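The card only records the PEFT version, so here is a hedged sketch of how such a LoRA adapter is typically loaded; the base checkpoint (`bigscience/bloom-560m`) is an assumption, since the card does not state which BLOOM variant the adapter was trained on.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloom-560m"          # assumed base model, not stated in the card
adapter_id = "ankitvyas/myBloomLoraModel"  # this repository (the LoRA adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the adapter weights

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```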
|
hopkins/mbart-finetuned-eng-ind-37
|
hopkins
| 2023-07-03T01:30:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T01:12:28Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-37
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7649
- Bleu: 21.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-36
|
hopkins
| 2023-07-03T01:24:08Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T01:06:22Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-36
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7650
- Bleu: 21.8172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-35
|
hopkins
| 2023-07-03T01:17:55Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T01:00:12Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-35
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7681
- Bleu: 21.8412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-34
|
hopkins
| 2023-07-03T01:15:25Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T00:57:39Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-34
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7610
- Bleu: 21.9140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-37
|
hopkins
| 2023-07-03T01:11:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T00:53:43Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-37
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6509
- Bleu: 20.9509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-36
|
hopkins
| 2023-07-03T01:05:54Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T00:47:41Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-36
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6485
- Bleu: 20.7366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
subrinkase/teste
|
subrinkase
| 2023-07-03T00:59:55Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-07-03T00:59:55Z |
---
license: bigscience-bloom-rail-1.0
---
|
hopkins/mbart-finetuned-eng-kor-30
|
hopkins
| 2023-07-03T00:42:37Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T00:29:16Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-30
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9943
- Bleu: 7.0556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MonkDalma/distilbert-base-uncased-distilled-clinc
|
MonkDalma
| 2023-07-03T00:39:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T00:28:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9480645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1888
- Accuracy: 0.9481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6456 | 1.0 | 318 | 1.1354 | 0.7487 |
| 0.8815 | 2.0 | 636 | 0.5794 | 0.8777 |
| 0.4748 | 3.0 | 954 | 0.3384 | 0.9213 |
| 0.2955 | 4.0 | 1272 | 0.2485 | 0.9355 |
| 0.2211 | 5.0 | 1590 | 0.2176 | 0.9413 |
| 0.1907 | 6.0 | 1908 | 0.2032 | 0.9471 |
| 0.1743 | 7.0 | 2226 | 0.1953 | 0.9487 |
| 0.1653 | 8.0 | 2544 | 0.1918 | 0.9477 |
| 0.1601 | 9.0 | 2862 | 0.1897 | 0.9484 |
| 0.1582 | 10.0 | 3180 | 0.1888 | 0.9481 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
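As a usage note (not part of the original card), the distilled classifier can presumably be run with the standard `transformers` pipeline; the predicted labels correspond to the clinc_oos `plus` intents.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MonkDalma/distilbert-base-uncased-distilled-clinc",
)
print(classifier("Please transfer 100 dollars to my savings account"))
```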
|
TalesLF/Reinforce-Pixelcopter-PLE-v0
|
TalesLF
| 2023-07-03T00:34:21Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T00:34:16Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.60 +/- 21.44
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
FPHam/Rachel_Assistant_Editor_13b_GPTQ
|
FPHam
| 2023-07-03T00:34:14Z | 6 | 5 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"LLM",
"writing",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-02T23:27:48Z |
---
tags:
- LLM
- llama
- writing
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/eNZ7P8D.png" alt="FPHam's Harper" style="width: 50%; min-width: 300px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Rachel Ko-fi (I'll give her a sip)</a></p>
</div>
<!-- header end -->
## Rachel is an assistant editor for your fiction. (v.2)
This is a retraining of [Harper Assistant Editor](https://huggingface.co/FPHam/Harper_AssistantEditor_V1_13b_GPTQ) (who has been fired - she was just too young and inexperienced to do this job properly)
Meet Rachel. She's a senior Assistant Editor at the famous FP Ham publishing company.
**IMPORTANT NOTE: Rachel uses ALPACA instruct**
If you want to get in touch with Rachel, a decent approach is to ask her to
```
### Instruction: Rewrite the following text: <the text>
### Response:
```
Feed her a short paragraph at a time - a few sentences. If you write long paragraphs, Rachel might feel the urge to shorten them.
You see, her attention span is like that of a goldfish.
You can give her specific instructions like:
```
Rewrite the following text using aristocratic British English,
also known as 'Received Pronunciation' (RP) or 'BBC English':
```
Oh, and just a heads up, if you ask her to use profanity, well, the outcome is all on you because she is very eager to comply.
Don't be surprised if Rachel jumps at the chance to correct your words, even if you thought they were perfectly fine.
She's just that eager to make her mark, you know?
## Examples
Here are some examples. The format was just as above: an instruction to rewrite the text, followed by the input text itself.
The input text was (of course):
```
It is a truth universally acknowledged, that a single man in possession
of a good fortune must be in want of a wife.
However little known the feelings or views of such a man may be on his
first entering a neighbourhood, this truth is so well fixed in the minds
of the surrounding families, that he is considered as the rightful property
of some one or other of their daughters.
```
Rewrite the following text in a modern, easy-to-read language
```
Everyone knows that if a guy has money and isn't married,
people will think he needs to find a wife.
So it's assumed that whoever owns all that
cash should hook up with someone from the neighborhood.
```
Rewrite the following text the way Zapp Brannigan from Futurama talks
```
Y'know, it's a fact, sonny, that any fella with a bit o' money to spare
has gotta find himself a missus.
They all know the score - ain't no secret that when a new face rolls into town,
there's gonna be some lucky lady claimin' him for her own.
Ain't nothin' like it in their heads, mind you, but they don't need no fancy
talkers to figure out who gets dibs on 'em.
```
Rewrite the following text reversing the genders
```
It is a truth universally acknowledged, that a single woman in
possession of a good fortune must be in want of a husband.
However little known the feelings or views of such a woman may
be on her first entering a neighbourhood, this truth is so well fixed
in the minds of the surrounding families, that she is considered as
the rightful property of some one or other of their sons.
```
Rewrite the following text in Czech
```
Je pravda, kterou uznávají všichni, že jednotlivec vlastnící nějaké bohatství musí
potřebovat ženu.
Ačkoli pocity nebo názory takového muže mohou být málo známé okolním rodinám,
když se poprvé setká s blízkým okolím, je tato pravda tak pevně zakořeněná
ve mysli sousedních rodin, že je považován za majetek právem jedné ze jejich dcer.
```
Rewrite the following text using profanity, obscenity, and vulgar language
```
(okay, try it for yourself if you must)
```
## Technicalities
Based on Meta LLaMA
Trained using ALPACA Instruct :
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: Rewrite the following text: <the text>
### Response:
```
You can have a conversation with Rachel about your text of course.
Params.
I'm more than happy for you to experiment with the parameters. The parameters I used for testing:
```
temperature: 0.7
top_p: 0.9
top_k: 20
repetition penalty: 1.15
```
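Putting the pieces together, here is a hedged sketch (not from the original card) of how the ALPACA-style prompt and the parameters above could be assembled in Python; actually generating text from the GPTQ weights requires a GPTQ-aware backend (e.g. AutoGPTQ or text-generation-webui), which is omitted here.
```python
def rachel_prompt(text: str) -> str:
    """Build Rachel's ALPACA-style instruction prompt."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        f"### Instruction: Rewrite the following text: {text}\n"
        "### Response:"
    )

# Sampling settings matching the values listed above.
generation_settings = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 20,
    "repetition_penalty": 1.15,
}

print(rachel_prompt("It is a truth universally acknowledged, that a single man ..."))
```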
|
FPHam/Karen_theEditor-13B-4bit-128g-GPTQ
|
FPHam
| 2023-07-03T00:33:49Z | 11 | 2 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-06-03T07:49:42Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://media.tenor.com/frGCmLDFbkMAAAAC/karen-ok.gif" alt="FPHam's Karen" style="width: 30%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
## Karen is an editor for your fiction. (v.0.2)
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in wit and wisdom, but respectful of your style.
She is also an absolute joy to chat with, although she may correct the grammar in your chats too from time to time (as a certain well-known LLM said, "She is a radiant beacon of amusement").
Her measurements are 13b, 4-bit and 128g and she is very proud of them.
**New:** Karen has an assistant editor named [Rachel](https://huggingface.co/FPHam/Rachel_Assistant_Editor_13b_GPTQ) who is more into rewriting than editing grammar.
## Karen The Editor 13B GPTQ (quantized by TheBloke)
These files are GPTQ 4bit model files for [Karen The Editor 13B](https://huggingface.co/FPHam/Karen_theEditor_13b_HF).
## Other repositories available
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
## Usage
Karen gets triggered by this prompt (pun intended):
```
USER: Edit the following for spelling and grammar mistakes:
ASSISTANT:
```
Feed it a paragraph at a time - that's where she works best - a few sentences.
Here is my instruct yaml for oobabooga webui (in Instruct mode):
```
user: "USER: Edit the following for spelling and grammar mistakes: "
bot: "ASSISTANT:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
context: "Assistant is a meticulous senior editor with a specialization in editing fictional stories. When given text, Assistant detects and corrects grammatical errors, including subject-verb agreement, tense consistency, punctuation, capitalization, use of correct articles and correct present perfect and past perfect tense.\n"
```
I am not sure to what extent the context part has any effect. It doesn't hurt, though.
This is version 0.2, finetuned just enough that it retains the underlying Vicuna qualities.
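For completeness, here is a hedged sketch (not part of the original card) of driving the prompt template above from Python. It loads the unquantized checkpoint linked earlier (`FPHam/Karen_theEditor_13b_HF`) rather than the GPTQ files in this repository, which need a GPTQ-aware loader; the sampling values are illustrative, not the author's recommendation.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/Karen_theEditor_13b_HF"  # unquantized variant of this GPTQ repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "unce upon time lived name John who have pet monkey Jackie."
prompt = f"USER: Edit the following for spelling and grammar mistakes: {text}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
    top_p=0.9,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```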
## Example:
Input
```
USER: Edit the following for spelling and grammar mistakes:
unce upon time lived name John who have pet monkey Jackie.
one day while walkin down the street he saw his friend Tom coming to him.
"Hey Tom" said John "hows it going?"
Tom replyied "Not too bad how 'bout yourself?". as they was talk Jackie sudden jump onto Tom shoulder and started play with his hair.
Tom look at John and ask "Is that yours?"
John reply "Yeah, this crazy lil guy follow me everywhere" just then Jackie grab hold of Tom glasses and try to take thmm off.
Tom struggle to keep his balance as he laugh hystericaly.
```
Output
```
ASSISTANT: Once upon a time there lived a man named John who had a pet monkey called Jackie.
One day while walking down the street he saw his friend Tom approaching him.
"Hey Tom," said John. "How's it going?"
Tom replied, "Not too bad, how about yourself?" As they were talking, Jackie suddenly jumped onto Tom's shoulder and began playing with his hair.
Tom looked at John and asked, "Is that yours?"
John replied, "Yeah, this crazy little guy follows me everywhere." Just then Jackie grabbed hold of Tom's glasses and tried to take them off.
Tom struggled to keep his balance as he laughed hysterically.
```
## Goal: to create the best grammar checker you have ever seen
## To do:
- train on an even larger dataset
- see if finetuning on just plain LLaMA without Vicuna would work better or worse (the theory is that it would become very focused on editing and nothing else)
- explore what different settings (temperature, top_p, top_k) do for this type of finetune
|
FPHam/Harper_AssistantEditor_V1_13b_GPTQ
|
FPHam
| 2023-07-03T00:33:01Z | 11 | 7 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"LLama",
"writing",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T18:00:59Z |
---
language:
- en
tags:
- LLama
- writing
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/Kd0Vpem.png" alt="FPHam's Harper" style="width: 30%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Harper Ko-fi (I'll give her a sip)</a></p>
</div>
<!-- header end -->
## Harper is an assistant editor for your fiction. (v.1)
Meet Harper. She's a young Assistant Editor at the famous FP Ham publishing company.
This is test version 1
Note: Harper was fired and replaced with Rachel https://huggingface.co/FPHam/Rachel_Assistant_Editor_13b_GPTQ
Harper v.1 uses Vicuna format:
```
User: Rewrite the following text: <the text>
Assistant:
```
Feed her a short paragraph at a time - a few sentences. If you write long paragraphs, Harper might feel the urge to shorten them.
You see, her attention span is like that of the new generation or a goldfish.
|
hopkins/mbart-finetuned-eng-deu-33
|
hopkins
| 2023-07-03T00:24:56Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T00:11:02Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-33
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6514
- Bleu: 20.8421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
samzoozi/q-FrozenLake-v1-4x4-noSlippery
|
samzoozi
| 2023-07-03T00:24:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T00:24:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is a helper from the Deep RL course notebooks (a sketch is given below).
model = load_from_hub(repo_id="samzoozi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
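`load_from_hub` is not part of a published library; the Deep RL course notebooks typically define it along these lines. The following is a hedged sketch of that helper (an assumption, not code shipped with this repo), built on `huggingface_hub.hf_hub_download` and `pickle`:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download the pickled Q-table dictionary from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```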
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.