modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 00:47:04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 00:46:57) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
xzuyn/Pythia-Deduped-410M-GGML
|
xzuyn
| 2023-05-28T16:15:09Z | 0 | 0 | null |
[
"gpt_neox",
"region:us"
] | null | 2023-05-28T14:43:59Z |
---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/EleutherAI/pythia-410m-deduped
|
Yhyu13/baize-v2-7b-gptq-4bit
|
Yhyu13
| 2023-05-28T15:55:01Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-27T14:17:33Z |
---
license: apache-2.0
---
GPTQ 4-bit, no act-order version, built for compatibility so that it works in text-generation-webui.
Generated using scripts from https://gitee.com/yhyu13/llama_-tools
Original weights: https://huggingface.co/project-baize/baize-v2-7b
Baize is a LoRA training framework that allows fine-tuning LLaMA models on commodity GPUs.
Check out my 13B Baize GPTQ 4-bit here: https://huggingface.co/Yhyu13/baize-v2-13b-gptq-4bit
This model has an interesting habit of generating follow-up questions that keep the conversation going. You can simply hit "Continue" and the chat carries on by itself.
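For reference, here is a hedged loading sketch outside text-generation-webui, assuming the AutoGPTQ package is installed; the exact quantized-file arguments may differ depending on how the checkpoint was exported:
```python
# Hedged sketch: assumes the auto-gptq package and a CUDA device are available, and that
# from_quantized can locate the quantized weights in this repo automatically
# (you may need to pass use_safetensors/model_basename depending on the file names).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "Yhyu13/baize-v2-7b-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(repo_id, device="cuda:0")

# Baize has its own chat prompt format; see the original project-baize repo for details.
prompt = "Hello, can you plan a weekend trip for me?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```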

|
casque/eimifukada
|
casque
| 2023-05-28T15:28:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T15:26:45Z |
---
license: creativeml-openrail-m
---
|
maharishiva/ppo_v2-LunarLander-v2
|
maharishiva
| 2023-05-28T15:26:59Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T15:26:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.01 +/- 22.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
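As a stand-in for the TODO above, here is a minimal hedged sketch for loading and evaluating this checkpoint; the zip filename is an assumption, so check the repository's file list first:
```python
# Hedged sketch: the checkpoint filename below is an assumption; verify it in the repo's file list.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="maharishiva/ppo_v2-LunarLander-v2",
    filename="ppo_v2-LunarLander-v2.zip",  # assumed name
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```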
|
Kurapika993/Toxic_classifier_bert
|
Kurapika993
| 2023-05-28T15:11:36Z | 4 | 0 |
transformers
|
[
"transformers",
"bert",
"fill-mask",
"text-classification",
"en",
"dataset:jigsaw_toxicity_pred",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-17T18:49:19Z |
---
license: apache-2.0
datasets:
- jigsaw_toxicity_pred
metrics:
- accuracy
pipeline_tag: text-classification
language:
- en
library_name: transformers
---
|
Almoooo/LLM-train-on-Sahih-al-Bukhari
|
Almoooo
| 2023-05-28T15:03:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-28T03:24:27Z |
This is LLM-train-on-Sahih-al-Bukhari.
|
jikkyjohn/roberta-base-finetuned-dapt-hotqa-v2-squad
|
jikkyjohn
| 2023-05-28T14:58:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-28T10:58:55Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: jikkyjohn/roberta-base-finetuned-dapt-hotqa-v2-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jikkyjohn/roberta-base-finetuned-dapt-hotqa-v2-squad
This model is a fine-tuned version of [jikkyjohn/roberta-base-MLM-retrainedonHotQA](https://huggingface.co/jikkyjohn/roberta-base-MLM-retrainedonHotQA) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7097
- Train End Logits Accuracy: 0.7986
- Train Start Logits Accuracy: 0.7579
- Validation Loss: 0.8542
- Validation End Logits Accuracy: 0.7652
- Validation Start Logits Accuracy: 0.7326
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 22142, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.1288 | 0.6991 | 0.6612 | 0.8988 | 0.7584 | 0.7255 | 0 |
| 0.7097 | 0.7986 | 0.7579 | 0.8542 | 0.7652 | 0.7326 | 1 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
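For illustration, a minimal hedged question-answering call; since this repo ships TensorFlow weights, the pipeline is expected to pick the TF backend automatically:
```python
from transformers import pipeline

# Hedged sketch: assumes the standard question-answering pipeline can load this checkpoint.
qa = pipeline("question-answering", model="jikkyjohn/roberta-base-finetuned-dapt-hotqa-v2-squad")
result = qa(
    question="What was the model fine-tuned from?",
    context="The checkpoint was fine-tuned from a RoBERTa base model that had been retrained on HotpotQA-style text.",
)
print(result["answer"], result["score"])
```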
|
soypablo/ppo-Huggy
|
soypablo
| 2023-05-28T14:53:38Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-28T14:53:32Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: soypablo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KHEW/MALA
|
KHEW
| 2023-05-28T14:53:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T14:50:31Z |
---
license: creativeml-openrail-m
---
|
amjadfqs/swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_10
|
amjadfqs
| 2023-05-28T14:37:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-28T05:43:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
model-index:
- name: swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9375490966221524
- name: Precision
type: precision
value: 0.9451238954076366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_10
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9375
- F1 Score: 0.9383
- Precision: 0.9451
- Sensitivity: 0.9381
- Specificity: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Precision | Sensitivity | Specificity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:-----------:|:-----------:|
| 1.3428 | 0.99 | 19 | 0.7059 | 0.7467 | 0.7535 | 0.7951 | 0.7464 | 0.9332 |
| 0.3308 | 1.97 | 38 | 0.2314 | 0.9183 | 0.9194 | 0.9239 | 0.9191 | 0.9792 |
| 0.1601 | 2.96 | 57 | 0.2024 | 0.9305 | 0.9314 | 0.9349 | 0.9306 | 0.9824 |
| 0.0976 | 4.0 | 77 | 0.3376 | 0.8904 | 0.8943 | 0.9126 | 0.8930 | 0.9724 |
| 0.0585 | 4.99 | 96 | 0.3893 | 0.8830 | 0.8853 | 0.9115 | 0.8854 | 0.9706 |
| 0.0432 | 5.97 | 115 | 0.2559 | 0.9214 | 0.9239 | 0.9330 | 0.9237 | 0.9802 |
| 0.0313 | 6.96 | 134 | 0.2175 | 0.9375 | 0.9383 | 0.9451 | 0.9381 | 0.9843 |
| 0.0176 | 8.0 | 154 | 0.2309 | 0.9313 | 0.9326 | 0.9386 | 0.9320 | 0.9827 |
| 0.0152 | 8.99 | 173 | 0.2358 | 0.9328 | 0.9339 | 0.9416 | 0.9336 | 0.9831 |
| 0.0089 | 9.87 | 190 | 0.2116 | 0.9360 | 0.9374 | 0.9437 | 0.9372 | 0.9839 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
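For illustration, a minimal hedged inference sketch with the image-classification pipeline; the image path is a hypothetical placeholder:
```python
from transformers import pipeline

# Hedged sketch: "mri_slice.png" is a hypothetical local file path.
classifier = pipeline(
    "image-classification",
    model="amjadfqs/swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_10",
)
for pred in classifier("mri_slice.png"):
    print(pred["label"], round(pred["score"], 4))
```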
|
BayesBayes/codeparrot-ds
|
BayesBayes
| 2023-05-28T14:33:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-26T22:22:53Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
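For illustration, a minimal hedged generation sketch:
```python
from transformers import pipeline

# Hedged sketch: a GPT-2 style checkpoint used for code completion.
generator = pipeline("text-generation", model="BayesBayes/codeparrot-ds")
print(generator("def fibonacci(n):", max_new_tokens=64, do_sample=True)[0]["generated_text"])
```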
|
indigorange/rl_course_vizdoom_health_gathering_supreme
|
indigorange
| 2023-05-28T14:18:35Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T14:18:28Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.44 +/- 5.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r indigorange/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
AeroAlly/ppo-Lunalander-v2
|
AeroAlly
| 2023-05-28T14:14:21Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T14:14:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.78 +/- 19.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
edata/ppo-Huggy
|
edata
| 2023-05-28T13:55:46Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-28T13:42:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: edata/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ilass/OktoberfestFoodDrinkModel
|
ilass
| 2023-05-28T13:50:54Z | 0 | 0 | null |
[
"persons",
"food",
"drinks",
"object-detection",
"dataset:ilass/OktoberfestFoodDatasetPlus",
"arxiv:1912.05007",
"license:bsd",
"region:us"
] |
object-detection
| 2023-05-28T13:20:24Z |
---
license: bsd
datasets:
- ilass/OktoberfestFoodDatasetPlus
pipeline_tag: object-detection
tags:
- persons
- food
- drinks
---
## Model Details
<!-- Provide a quick summary of what the model is/does. -->
This model is trained on https://huggingface.co/datasets/ilass/OktoberfestFoodDatasetPlus
Its categories are: drinkServed, foodServed, person.
It is trained with YOLOv8.
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ilassAG/OktoberfestFoodDataset
- **Paper:** https://arxiv.org/abs/1912.05007
- **Demo:** https://huggingface.co/datasets/ilass/OktoberfestFoodDatasetPlus
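Since the card states the model was trained with YOLOv8, a hedged inference sketch with the Ultralytics API might look like this; the weights filename and image path are hypothetical placeholders for the actual files in this repository:
```python
from ultralytics import YOLO

# Hedged sketch: "best.pt" and "oktoberfest.jpg" are hypothetical placeholders.
model = YOLO("best.pt")  # download the trained weights from this repo first
results = model("oktoberfest.jpg")
for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))
```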
|
twnlp/chinese-macbert-base-similarity
|
twnlp
| 2023-05-28T13:22:20Z | 161 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-06T13:31:28Z |
---
language:
- zh
---
MacBERT is an improved BERT with a novel MLM-as-correction pre-training task, which mitigates the discrepancy between pre-training and fine-tuning. This checkpoint was retrained on a large Chinese text-matching dataset.
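The repo is tagged for the fill-mask pipeline, so a minimal hedged sketch would be:
```python
from transformers import pipeline

# Hedged sketch: uses the fill-mask pipeline this repository is tagged with.
fill_mask = pipeline("fill-mask", model="twnlp/chinese-macbert-base-similarity")
for pred in fill_mask("今天天气很[MASK]。"):
    print(pred["token_str"], round(pred["score"], 4))
```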
|
smartik/mbart-large-50-finetuned-ua-gec-2.0
|
smartik
| 2023-05-28T13:22:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-28T12:22:50Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mbart-large-50-finetuned-ua-gec-2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-ua-gec-2.0
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4067
- Rouge1: 18.2963
- Rouge2: 10.2365
- Rougel: 18.2593
- Rougelsum: 18.2759
- Gen Len: 28.6533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2043 | 1.0 | 2020 | 0.3299 | 18.4389 | 10.2365 | 18.3963 | 18.4009 | 28.6513 |
| 0.1058 | 2.0 | 4040 | 0.3667 | 18.2963 | 10.2365 | 18.2593 | 18.2759 | 28.6207 |
| 0.0585 | 3.0 | 6060 | 0.4067 | 18.2963 | 10.2365 | 18.2593 | 18.2759 | 28.6533 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
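For illustration only, a hedged text2text-generation sketch; mBART-50 is multilingual, so tokenizer language settings and generation parameters may need adjusting for Ukrainian input:
```python
from transformers import pipeline

# Hedged sketch: generation settings may need tuning for this fine-tune.
corrector = pipeline("text2text-generation", model="smartik/mbart-large-50-finetuned-ua-gec-2.0")
print(corrector("Я дуже любити читати книжки.", max_length=64)[0]["generated_text"])
```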
|
RajuKandasamy/ponniyinselvan_1.4b_alpha
|
RajuKandasamy
| 2023-05-28T13:18:56Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"ta",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-28T12:47:58Z |
---
license: apache-2.0
language:
- ta
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is trained on the PonniyinSelvan Tamil corpus dataset.
## Model Details
Base model used is EleutherAI's Pythia 1.4b
### Model Description
- **Finetuned from model [optional]:** Pythia 1.4b
## Uses
For educational and research purposes only. Not fit for any kind of practical use.
## Bias, Risks, and Limitations
The base model's bias, risks, and limitations apply.
## How to Get Started with the Model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "RajuKandasamy/ponniyinselvan_1.4b_alpha"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_8bit=False).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()
prompt="""வந்தியத்தேவன்"""
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
attention_mask = torch.ones_like(input_ids).to(model.device)
print("Thinking ...\n ")
with torch.no_grad():
    output = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=256, early_stopping=False, temperature=0.9, top_p=0.9, top_k=500, do_sample=True, output_scores=True, pad_token_id=tokenizer.eos_token_id, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
output_str = tokenizer.decode(output[0], skip_special_tokens=False)
print(output_str)
```
## Training Details
10 epochs
### Training Data
ponniyinselvan text corpus
### Training Procedure
Causal language modelling, with a custom BPE tokenizer
|
Xedmos/LunarLander
|
Xedmos
| 2023-05-28T13:13:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T13:13:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.67 +/- 18.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Kevin8093/pokemon-lora
|
Kevin8093
| 2023-05-28T12:57:03Z | 5 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-28T07:55:10Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Kevin8093/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. A usage sketch follows, and some example images can be found below.
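A hedged inference sketch for these LoRA weights; depending on your diffusers version, `pipe.load_lora_weights(...)` may be preferred over `load_attn_procs`:
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: assumes a CUDA device and a diffusers version with UNet attention-processor LoRA loading.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("Kevin8093/pokemon-lora")  # newer diffusers: pipe.load_lora_weights("Kevin8093/pokemon-lora")
image = pipe("a green pokemon with big eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```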




|
torrandodideas/test1
|
torrandodideas
| 2023-05-28T12:52:14Z | 0 | 0 |
nemo
|
[
"nemo",
"STT",
"automatic-speech-recognition",
"pt",
"dataset:databricks/databricks-dolly-15k",
"arxiv:1910.09700",
"license:openrail",
"region:us"
] |
automatic-speech-recognition
| 2023-05-28T12:47:13Z |
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
language:
- pt
metrics:
- accuracy
- bertscore
library_name: nemo
pipeline_tag: automatic-speech-recognition
tags:
- STT
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CleverShovel/falcon-7b-instruct-sharded-bf16
|
CleverShovel
| 2023-05-28T12:28:11Z | 23 | 3 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-26T17:36:10Z |
Copy of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) but sharded. Please refer to the original repo for details about license/dataset/etc.
Here is a [notebook](https://colab.research.google.com/drive/12sqdlzv9vRuYhDFbh0kIU8BL7de24uST?usp=sharing) for sharding.
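A hedged loading sketch; the Falcon custom modeling code requires `trust_remote_code=True`, and `device_map="auto"` needs accelerate installed:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hedged sketch: assumes accelerate is installed and enough GPU/CPU memory is available.
repo_id = "CleverShovel/falcon-7b-instruct-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```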
|
smartik/mbart-large-50-finetuned-ua-gec
|
smartik
| 2023-05-28T12:21:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-28T08:12:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mbart-large-50-finetuned-ua-gec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-ua-gec
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3259
- Rouge1: 18.4389
- Rouge2: 10.2365
- Rougel: 18.3963
- Rougelsum: 18.4009
- Gen Len: 28.6067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2207 | 1.0 | 4040 | 0.3259 | 18.4389 | 10.2365 | 18.3963 | 18.4009 | 28.6067 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
koorukuroo/KcELECTRA_base_beep
|
koorukuroo
| 2023-05-28T12:02:33Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-06T10:20:25Z |
---
license: mit
---
Below are the results of fine-tuning on the BEEP! dataset for 10 epochs.
| | Loss | Acc | Prec | Rec | F1 |
|-----|------|-------|------|-------|-------|
|TRAIN| 0.11 | 0.965 | 0.966| 0.972 | 0.969 |
| VAL | 0.73 | 0.807 | 0.947| 0.749 | 0.837 |
When classifying with a threshold of 0.5, accuracy on the dev dataset is 0.85.
We also visualized the resulting embeddings with t-SNE.
https://v5.core.today/notebook/34XX0RYM4#KcELECTRA_base_beep.ipynb
```python
model = Model.load_from_checkpoint(latest_ckpt)

def infer(x):
    return torch.softmax(
        model(**model.tokenizer(x, return_tensors='pt')).logits, dim=-1)
```
```
infer('송중기 시대극은 믿고본다. 첫회 신선하고 좋았다.')
```
```
tensor([[0.7414, 0.2586]], grad_fn=<SoftmaxBackward>)
```
```
infer('유이 자연스러워진 연기')
```
```
tensor([[0.7627, 0.2373]], grad_fn=<SoftmaxBackward>)
```
|
fedorn/ppo-LunarLander-v2
|
fedorn
| 2023-05-28T12:00:55Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T10:03:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.27 +/- 17.43
name: mean_reward
verified: false
---
# **MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Vc-Cpt/my_cust_events_model
|
Vc-Cpt
| 2023-05-28T11:59:55Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-28T11:44:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_cust_events_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_cust_events_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9138
- Rouge1: 0.0709
- Rouge2: 0.0164
- Rougel: 0.0599
- Rougelsum: 0.06
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 24 | 3.4693 | 0.0661 | 0.0194 | 0.0578 | 0.0577 | 19.0 |
| No log | 2.0 | 48 | 3.1028 | 0.0707 | 0.0186 | 0.0622 | 0.0626 | 19.0 |
| No log | 3.0 | 72 | 2.9566 | 0.0712 | 0.0164 | 0.0601 | 0.0601 | 19.0 |
| No log | 4.0 | 96 | 2.9138 | 0.0709 | 0.0164 | 0.0599 | 0.06 | 19.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ahmedbilal5/t5-base-QG-finetuned-FairytaleQA
|
ahmedbilal5
| 2023-05-28T11:57:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:fairytale_qa",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-05T14:23:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fairytale_qa
metrics:
- rouge
model-index:
- name: t5-base-QG-finetuned-FairytaleQA
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: fairytale_qa
type: fairytale_qa
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 45.1292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-QG-finetuned-FairytaleQA
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the fairytale_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0876
- Rouge1: 45.1292
- Rouge2: 26.5987
- Rougel: 43.2701
- Rougelsum: 43.2744
- Gen Len: 15.1024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1882 | 1.0 | 2137 | 1.0876 | 45.1292 | 26.5987 | 43.2701 | 43.2744 | 15.1024 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
schibfab/landscape_classification_vgg16_fine_tuned-v1
|
schibfab
| 2023-05-28T11:53:07Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-28T10:06:41Z |
---
pipeline_tag: image-classification
---
|
Gamabumba/Taxi-v3
|
Gamabumba
| 2023-05-28T11:37:43Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T11:36:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Gamabumba/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Vc-Cpt/my_awesome_billsum_model
|
Vc-Cpt
| 2023-05-28T11:32:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-28T11:01:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1478
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6126
- Rouge1: 0.1478
- Rouge2: 0.0546
- Rougel: 0.1226
- Rougelsum: 0.1228
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8882 | 0.1266 | 0.0362 | 0.1065 | 0.1065 | 19.0 |
| No log | 2.0 | 124 | 2.6902 | 0.1368 | 0.045 | 0.1134 | 0.1134 | 19.0 |
| No log | 3.0 | 186 | 2.6293 | 0.143 | 0.0496 | 0.118 | 0.1183 | 19.0 |
| No log | 4.0 | 248 | 2.6126 | 0.1478 | 0.0546 | 0.1226 | 0.1228 | 19.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
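For illustration, a hedged summarization sketch; T5 billsum fine-tunes conventionally expect a "summarize: " prefix:
```python
from transformers import pipeline

# Hedged sketch: a T5-small fine-tune, called through the summarization pipeline.
summarizer = pipeline("summarization", model="Vc-Cpt/my_awesome_billsum_model")
bill_text = "summarize: The bill requires state agencies to publish quarterly reports on infrastructure spending and to post them online."
print(summarizer(bill_text, max_length=60, min_length=10)[0]["summary_text"])
```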
|
YCHuang2112/q-Taxi-v3
|
YCHuang2112
| 2023-05-28T11:17:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T11:17:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="YCHuang2112/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
art3mis0970/moon
|
art3mis0970
| 2023-05-28T11:14:12Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T11:14:12Z |
---
license: creativeml-openrail-m
---
|
YakovElm/Qt20Classic_256
|
YakovElm
| 2023-05-28T11:01:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-28T11:01:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt20Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1866
- Train Accuracy: 0.9454
- Validation Loss: 0.1784
- Validation Accuracy: 0.9586
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2203 | 0.9383 | 0.1651 | 0.9586 | 0 |
| 0.2026 | 0.9462 | 0.1571 | 0.9586 | 1 |
| 0.1866 | 0.9454 | 0.1784 | 0.9586 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BAK-HIRI/rl_course_vizdoom_health_gathering_supreme
|
BAK-HIRI
| 2023-05-28T11:01:09Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T11:01:06Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.79 +/- 2.97
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r BAK-HIRI/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
NightHaven/NabNab
|
NightHaven
| 2023-05-28T10:52:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T10:51:54Z |
---
license: creativeml-openrail-m
---
|
ugiugi/distilbert
|
ugiugi
| 2023-05-28T10:35:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-28T08:49:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1149
- Accuracy: 0.4704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
glitchyordis/LunarLanderhf-v2
|
glitchyordis
| 2023-05-28T10:30:02Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T10:25:53Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -138.38 +/- 88.83
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo_hf'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'glitchyordis/LunarLanderhf-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
glitchyordis/LunarLander-v2
|
glitchyordis
| 2023-05-28T10:28:30Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T10:16:37Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -113.79 +/- 36.26
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'env_id': 'LunarLander-v2'
'learning_rate': 0.00025
'seed': 1
'total_timesteps': 100000
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'glitchyordis/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
romaniya/Truma
|
romaniya
| 2023-05-28T10:22:19Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-28T10:22:19Z |
---
license: bigscience-openrail-m
---
|
YCHuang2112/q-FrozenLake-v1-8x8-Slippery
|
YCHuang2112
| 2023-05-28T10:18:32Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T03:21:27Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.60 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="YCHuang2112/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RinInori/bert-base-uncased_finetuned_sentiments
|
RinInori
| 2023-05-28T10:16:19Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T04:18:25Z |
---
language: en
license: apache-2.0
datasets:
- custom
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# BertForSequenceClassification Fine-tuned for Sentiment Analysis
This model is a fine-tuned version of `BertForSequenceClassification` for sentiment analysis.
It is trained on a dataset of texts with six different emotions: anger, fear, joy, love, sadness, and surprise.
The model was trained and tested on a labeled dataset from [Kaggle](https://www.kaggle.com/datasets/praveengovi/emotions-dataset-for-nlp).
Github link:
https://github.com/hennypurwadi/Bert_FineTune_Sentiment_Analysis
The labeled dataset I used to fine-tune and train the model can be found at:
https://www.kaggle.com/datasets/praveengovi/emotions-dataset-for-nlp?select=train.txt
## Model Training Details
- **Pretrained model**: `bert-base-uncased` ("uncased" means the model was trained on lowercased text)
- **Number of labels**: 6
  - "Label_0": "anger"
  - "Label_1": "fear"
  - "Label_2": "joy"
  - "Label_3": "love"
  - "Label_4": "sadness"
  - "Label_5": "surprise"
- **Learning rate**: 2e-5
- **Epsilon**: 1e-8
- **Epochs**: 10
- **Warmup steps**: 0
- **Optimizer**: AdamW with correct_bias=False
## Dataset
The model was trained and tested on a labeled dataset from [Kaggle](https://www.kaggle.com/datasets/praveengovi/emotions-dataset-for-nlp).
## Predicting sentiments on unlabeled datasets
To predict sentiments on unlabeled datasets, use the `predict_sentiments` function provided in this repository.
The unlabeled dataset to be predicted should have a single column named "text".
For example, to predict an unlabeled dataset collected from Twitter (dc_America.csv), call `predict_sentiments(model_name, tokenizer_name, '/content/drive/MyDrive/DLBBT01/data/c_unlabeled/dc_America.csv')`.
## Loading and using the model
To load and use the model and tokenizer, use the following code:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
import pandas as pd

def predict_sentiments(model_name, tokenizer_name, input_file):
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
    df = pd.read_csv(input_file)
    # Tokenize input text
    test_inputs = tokenizer(list(df['text']), padding=True, truncation=True, max_length=128, return_tensors='pt')
    # Make predictions
    with torch.no_grad():
        model.eval()
        outputs = model(test_inputs['input_ids'], token_type_ids=None, attention_mask=test_inputs['attention_mask'])
    logits = outputs[0].detach().cpu().numpy()
    predictions = logits.argmax(axis=-1)
    # Map the predicted labels back to their original names
    int2label = {0: 'anger', 1: 'fear', 2: 'joy', 3: 'love', 4: 'sadness', 5: 'surprise'}
    predicted_labels = [int2label[p] for p in predictions]
    # Add the predicted labels to the test dataframe
    df['label'] = predicted_labels
    # Save the predictions to a file
    output_file = input_file.replace(".csv", "_predicted.csv")
    df.to_csv(output_file, index=False)

model_name = "RinInori/bert-base-uncased_finetune_sentiments"
tokenizer_name = "RinInori/bert-base-uncased_finetune_sentiments"

# Predict unlabeled data
predict_sentiments(model_name, tokenizer_name, '/content/drive/MyDrive/DLBBT01/data/c_unlabeled/dc_America.csv')

# Load predicted data
df_Am = pd.read_csv('/content/drive/MyDrive/DLBBT01/data/c_unlabeled/dc_America_predicted.csv')
df_Am.head()

from transformers import AutoTokenizer
import matplotlib.pyplot as plt

# Load tokenizer
tokenizer_name = "RinInori/bert-base-uncased_finetune_sentiments"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, do_lower_case=True)

# Load dataset
input_file = '/content/drive/MyDrive/DLBBT01/data/c_unlabeled/dc_America_predicted.csv'
df_Am = pd.read_csv(input_file)

# Examine distribution of data based on labels
sentences = df_Am.text.values
print("Distribution of data based on labels: ", df_Am.label.value_counts())
MAX_LEN = 512

# Plot label distribution
label_count = df_Am['label'].value_counts()
plot_users = label_count.plot.pie(autopct='%1.1f%%', figsize=(4, 4))
plt.rc('axes', unicode_minus=False)
```
|
chrissinerantzi/LTAexperiments
|
chrissinerantzi
| 2023-05-28T10:04:52Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-05-28T10:04:52Z |
---
license: cc-by-nc-sa-4.0
---
|
RootYuan/RedLing-7B-v0.1
|
RootYuan
| 2023-05-28T09:18:30Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-24T05:09:20Z |
---
license: apache-2.0
---
````
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer, StoppingCriteria, StoppingCriteriaList

# Load the model and tokenizer from this repository.
tokenizer = AutoTokenizer.from_pretrained("RootYuan/RedLing-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("RootYuan/RedLing-7B-v0.1")

stop_token_ids = tokenizer.convert_tokens_to_ids(["<EOT>"])

class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        for stop_id in stop_token_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False
````
```python
PROMPT_TEMPLATE = "USER:{user}<EOT>ASSISTANT:"

user = "Create a daily itinerary based on the given information. Our family is looking for a 9-day Morocco trip that has light to moderate activity levels."
prompts = PROMPT_TEMPLATE.format(user=user)

inputs = tokenizer(prompts, return_tensors="pt")
stop = StopOnTokens()

generation_kwargs = dict(
    inputs=inputs["input_ids"],
    max_new_tokens=2048,
    stopping_criteria=StoppingCriteriaList([stop]),
)
outputs = model.generate(**generation_kwargs)
ans = tokenizer.decode(outputs[0])[len(prompts):]
```
RedLing-7B was fine-tuned from RedPajama-7B. Here are some example results.
1.
USER:
Design a syllabus for the given course. Students should be given a list of the chapters with brief explanations of each chapter's purpose.
Programming for Everybody (Getting Started with Python)
ASSISTANT:
Welcome to Programming for Everybody (Getting Started with Python)! In this course, we will cover the basics of programming in Python. We will start with the basics of Python syntax and data types, and then move on to more advanced topics such as control structures, functions, and object-oriented programming. By the end of the course, you will have a solid understanding of programming in Python and be ready to start creating your own programs.
Chapter 1: Introduction to Programming
- Introduction to programming
- Installing Python
- Setting up a development environment
- Hello, World!
Chapter 2: Control Structures
- Control structures
- Loops
- If-else statements
- For loops
- While loops
- Break and continue statements
Chapter 3: Functions
- Functions
- Creating a function
- Calling a function
- Returning from a function
- Passing functions as arguments
Chapter 4: Objects and Classes
- Objects and classes
- Creating objects
- Inheritance and polymorphism
- Accessing object attributes and methods
Chapter 5: Data Types
- Data types
- Integers
- Floating-point numbers
- Strings
- Lists
- Tuples
- Dictionaries
Chapter 6: Control Structures Part 2
- Control structures part 2
- Advanced loops
- Advanced control structures
Chapter 7: File I/O
- File I/O
- Reading and writing files
- Opening files
- Closing files
Chapter 8: Web Development
- Web development
- HTML
- CSS
- JavaScript
- Web APIs
Chapter 9: Testing
- Testing
- Unit testing
- Integration testing
- System testing
Chapter 10: Advanced Topics
- Advanced topics
- Object-oriented programming
- Functional programming
- Concurrency and parallelism
- Data visualization
Congratulations on completing Programming for Everybody (Getting Started with Python)!
2.
USER:
List the personality traits that are required to be successful in the given job.
Social Media Marketer
ASSISTANT:
The personality traits required to be successful in the job of a social media marketer include creativity, adaptability, attention to detail, excellent communication skills, and the ability to work well under pressure
3.
USER:
Please list the courses that someone interested in the first course might like.
Bitcoin and Cryptocurrency Technologies
ASSISTANT:
If someone is interested in the first course, they might also be interested in the following courses:
- Introduction to Blockchain
- Cryptography and Network Security
- Digital Currencies and Their Impact on the Financial System
4.
USER:
Come up with the courses that one is supposed to take in order to be an expert in a given field.
Graphic Design
ASSISTANT:
To be an expert in graphic design, one should take courses in typography, layout, color theory, and design history. Additionally, one should have experience working with various design software such as Adobe Photoshop, Illustrator, and InDesign.
ASSISTANT2:
To become an expert in graphic design, one should take courses in typography, layout design, color theory, and visual communication. Additionally, one should gain practical experience through internships or freelance work.
ChatGPT:
Both response1 and response2 provide valuable information on the courses one should take to become an expert in graphic design. However, if we were to compare the two, response2 provides slightly more comprehensive information by mentioning visual communication as an important aspect of graphic design. It also emphasizes the importance of gaining practical experience through internships or freelance work, which is an essential component of becoming an expert in any field. Therefore, response2 can be considered slightly better in terms of providing a more well-rounded perspective on the necessary courses and experiences for becoming an expert in graphic design.
5.
USER:
Create a daily itinerary based on the given information.
Our family is looking for a 9-day Morocco trip that has light to moderate activity levels.
ASSISTANT:
Morocco is a great choice for a family trip. Here's a 9-day itinerary that includes light to moderate activity levels:
Day 1: Arrival in Marrakech and visit the Bahia Palace and the Saadian Tombs.
Day 2: Visit the Majorelle Gardens and the Menara Gardens.
Day 3: Take a camel ride in the Sahara Desert.
Day 4: Visit the Ait Ben Haddou Kasbah and the Ouzoud Waterfalls.
Day 5: Visit the Djemaa el Fna square and the Ben Youssef Medersa.
Day 6: Take a cooking class and learn how to make traditional Moroccan dishes.
Day 7: Visit the Kasbah of the Udayas and the Hassan II Mosque.
Day 8: Visit the Dakhla Oasis and the Todra Gorge.
Day 9: Departure from Marrakech.
This itinerary includes a mix of city exploration, outdoor activities, and cultural experiences. It's a great way to see the highlights of Morocco while staying active and engaged.
6.
instruction:
Give students tips on how to keep their nerves under control during class presentations.
response1:
1. Practice, practice, practice: The more you practice, the more comfortable you will become with the material and the delivery.
2. Use visual aids: Visual aids such as slides or props can help keep your audience engaged and prevent you from feeling nervous.
3. Use vocal variety: Vary your tone, pitch, and volume to keep your audience interested.
4. Focus on breathing: Take deep breaths before and during your presentation to calm your nerves.
5. Use positive self-talk: Tell yourself that you are capable and prepared for the task at hand.
6. Seek support: Reach out to friends, family, or a professional mentor for support and encouragement.
7. Be confident: Believe in yourself and your ability to deliver a successful presentation.
8. Visualize success: Imagine yourself delivering a successful presentation and receiving positive feedback from your audience.
9. Eliminate distractions: Try to eliminate any potential distractions before your presentation, such as turning off your phone or finding a quiet place to take notes.
10. Take a break: If you feel overwhelmed or stressed during your presentation, take a break and come back with fresh energy and focus.
response2:
Here are some tips to help students keep their nerves under control during class presentations:
1. Practice your presentation beforehand to feel more confident.
2. Take deep breaths to calm yourself down.
3. Visualize a successful presentation to boost your confidence.
4. Use positive self-talk to motivate yourself.
5. Focus on the content of your presentation rather than the audience.
6. Engage with the audience by making eye contact and asking questions.
7. Be aware of your body language and posture to project confidence.
8. Take a break if you feel overwhelmed.
|
AustinCarthy/Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB
|
AustinCarthy
| 2023-05-28T09:09:27Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-05-28T02:10:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benign_40K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0177
- Accuracy: 0.9976
- F1: 0.9746
- Precision: 0.9994
- Recall: 0.951
- Roc Auc Score: 0.9755
- Tpr At Fpr 0.01: 0.965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
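A hedged sketch of how the hyperparameters above could be expressed with the 🤗 `TrainingArguments` API; the output directory is a placeholder and this is not the original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,  # mixed_precision_training: Native AMP
)
```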
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0034 | 1.0 | 66875 | 0.0124 | 0.9970 | 0.9680 | 0.9926 | 0.9446 | 0.9721 | 0.9258 |
| 0.0015 | 2.0 | 133750 | 0.0227 | 0.9969 | 0.9667 | 0.9974 | 0.9378 | 0.9688 | 0.9346 |
| 0.0011 | 3.0 | 200625 | 0.0224 | 0.9969 | 0.9669 | 0.9991 | 0.9366 | 0.9683 | 0.9476 |
| 0.0005 | 4.0 | 267500 | 0.0200 | 0.9975 | 0.9731 | 0.9992 | 0.9484 | 0.9742 | 0.9618 |
| 0.0006 | 5.0 | 334375 | 0.0177 | 0.9976 | 0.9746 | 0.9994 | 0.951 | 0.9755 | 0.965 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
soBeauty/xlm-roberta-base-KFoldSukhoThaiOnly-mlm-20230524
|
soBeauty
| 2023-05-28T08:59:37Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-27T14:22:32Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-KFoldSukhoThaiOnly-mlm-20230524
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-KFoldSukhoThaiOnly-mlm-20230524
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DAMO-NLP-SG/zero-shot-classify-SSTuning-large
|
DAMO-NLP-SG
| 2023-05-28T08:33:42Z | 344 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"Zero-Shot Classification",
"zero-shot-classification",
"arxiv:2305.11442",
"license:mit",
"autotrain_compatible",
"region:us"
] |
zero-shot-classification
| 2023-05-19T22:57:21Z |
---
inference: false
license: mit
tags:
- Zero-Shot Classification
pipeline_tag: zero-shot-classification
---
# Zero-shot text classification (large-sized model) trained with self-supervised tuning
Zero-shot text classification model trained with self-supervised tuning (SSTuning).
It was introduced in the paper [Zero-Shot Text Classification via Self-Supervised Tuning](https://arxiv.org/abs/2305.11442) by
Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing
and first released in [this repository](https://github.com/DAMO-NLP-SG/SSTuning).
The model backbone is RoBERTa-large.
## Model description
The model is tuned with unlabeled data using a learning objective called first sentence prediction (FSP).
The FSP task is designed by considering both the nature of the unlabeled corpus and the input/output format of classification tasks.
The training and validation sets are constructed from the unlabeled corpus using FSP.
During tuning, BERT-like pre-trained masked language
models such as RoBERTa and ALBERT are employed as the backbone, and an output layer for classification is added.
The learning objective for FSP is to predict the index of the correct label.
A cross-entropy loss is used for tuning the model.
## Model variations
There are three versions of models released. The details are:
| Model | Backbone | #params | accuracy | Speed | #Training data
|------------|-----------|----------|-------|-------|----|
| [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | Low | High | 20.48M |
| [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | Medium | Medium | 5.12M |
| [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | High | Low| 5.12M |
Please note that zero-shot-classify-SSTuning-base is trained with more data (20.48M) than reported in the paper, as this increases accuracy.
## Intended uses & limitations
The model can be used for zero-shot text classification such as sentiment analysis and topic classification. No further finetuning is needed.
The number of labels should be 2 ~ 20.
### How to use
You can try the model with the Colab [Notebook](https://colab.research.google.com/drive/17bqc8cXFF-wDmZ0o8j7sbrQB9Cq7Gowr?usp=sharing).
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch, string, random
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-large")
model = AutoModelForSequenceClassification.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-large")
text = "I love this place! The food is always so fresh and delicious."
list_label = ["negative", "positive"]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
list_ABC = [x for x in string.ascii_uppercase]
def check_text(model, text, list_label, shuffle=False):
list_label = [x+'.' if x[-1] != '.' else x for x in list_label]
list_label_new = list_label + [tokenizer.pad_token]* (20 - len(list_label))
if shuffle:
random.shuffle(list_label_new)
s_option = ' '.join(['('+list_ABC[i]+') '+list_label_new[i] for i in range(len(list_label_new))])
text = f'{s_option} {tokenizer.sep_token} {text}'
model.to(device).eval()
encoding = tokenizer([text],truncation=True, max_length=512,return_tensors='pt')
item = {key: val.to(device) for key, val in encoding.items()}
logits = model(**item).logits
logits = logits if shuffle else logits[:,0:len(list_label)]
probs = torch.nn.functional.softmax(logits, dim = -1).tolist()
predictions = torch.argmax(logits, dim=-1).item()
probabilities = [round(x,5) for x in probs[0]]
print(f'prediction: {predictions} => ({list_ABC[predictions]}) {list_label_new[predictions]}')
print(f'probability: {round(probabilities[predictions]*100,2)}%')
check_text(model, text, list_label)
# prediction: 1 => (B) positive.
# probability: 99.84%
```
### BibTeX entry and citation info
```bibtex
@inproceedings{acl23/SSTuning,
author = {Chaoqun Liu and
Wenxuan Zhang and
Guizhen Chen and
Xiaobao Wu and
Anh Tuan Luu and
Chip Hong Chang and
Lidong Bing},
title = {Zero-Shot Text Classification via Self-Supervised Tuning},
booktitle = {Findings of the Association for Computational Linguistics: ACL 2023},
year = {2023},
url = {https://arxiv.org/abs/2305.11442},
}
```
|
bagassword21/maudygoon
|
bagassword21
| 2023-05-28T08:30:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T08:29:49Z |
---
license: creativeml-openrail-m
---
|
eVaggelia/myNewModel
|
eVaggelia
| 2023-05-28T08:18:15Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-07T12:19:30Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: eVaggelia/myNewModel
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# eVaggelia/myNewModel
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.2014
- Validation Loss: 9.9498
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
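The optimizer dictionary above is the serialized form of an `AdamWeightDecay` optimizer with a warmup + polynomial-decay schedule, which is what `transformers.create_optimizer` produces; a hedged reconstruction (the step counts below are placeholders, not values taken from the training run) could look like:

```python
from transformers import create_optimizer

# Placeholder step counts; only init_lr, warmup steps and weight_decay_rate come from the config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-05,
    num_warmup_steps=1000,
    num_train_steps=2000,
    weight_decay_rate=0.01,
)
```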
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2014 | 9.9498 | 0 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
innovation64/ppo-SnowballTarget
|
innovation64
| 2023-05-28T08:04:22Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-05-28T08:04:17Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: innovation64/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
shamiulshifat/ppo-Huggy
|
shamiulshifat
| 2023-05-28T07:39:32Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-28T07:39:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: shamiulshifat/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jonastokoliu/audio_cls_wavlm-libri-clean-100h-base-plus_minds14_finetune
|
jonastokoliu
| 2023-05-28T07:12:00Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-05-28T07:08:09Z |
---
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_cls_wavlm-libri-clean-100h-base-plus_minds14_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_cls_wavlm-libri-clean-100h-base-plus_minds14_finetune
This model is a fine-tuned version of [patrickvonplaten/wavlm-libri-clean-100h-base-plus](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6462
- Accuracy: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 2.6464 | 0.0885 |
| No log | 2.0 | 4 | 2.6503 | 0.0442 |
| No log | 3.0 | 6 | 2.6485 | 0.0619 |
| No log | 4.0 | 8 | 2.6484 | 0.0708 |
| 2.6352 | 5.0 | 10 | 2.6473 | 0.0708 |
| 2.6352 | 6.0 | 12 | 2.6476 | 0.0708 |
| 2.6352 | 7.0 | 14 | 2.6473 | 0.0708 |
| 2.6352 | 8.0 | 16 | 2.6466 | 0.0708 |
| 2.6352 | 9.0 | 18 | 2.6462 | 0.0708 |
| 2.6349 | 10.0 | 20 | 2.6462 | 0.0708 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jonastokoliu/audio_cls_wav2vec2-base_minds14_finetune
|
jonastokoliu
| 2023-05-28T07:06:27Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-05-28T06:55:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_cls_wav2vec2-base_minds14_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_cls_wav2vec2-base_minds14_finetune
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6523
- Accuracy: 0.0619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 2.6391 | 0.0708 |
| No log | 2.0 | 4 | 2.6421 | 0.0442 |
| No log | 3.0 | 6 | 2.6465 | 0.0354 |
| No log | 4.0 | 8 | 2.6486 | 0.0442 |
| 2.6349 | 5.0 | 10 | 2.6502 | 0.0442 |
| 2.6349 | 6.0 | 12 | 2.6507 | 0.0531 |
| 2.6349 | 7.0 | 14 | 2.6514 | 0.0619 |
| 2.6349 | 8.0 | 16 | 2.6528 | 0.0619 |
| 2.6349 | 9.0 | 18 | 2.6528 | 0.0619 |
| 2.6278 | 10.0 | 20 | 2.6523 | 0.0619 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Abhinay0926/ppo-Huggy
|
Abhinay0926
| 2023-05-28T06:41:26Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-28T06:41:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: Abhinay322433/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HasinMDG/distilroberta_SD_government_v2
|
HasinMDG
| 2023-05-28T06:23:03Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-28T06:22:52Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/distilroberta_SD_government_v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/distilroberta_SD_government_v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
torreygooch/ppo-Huggy3
|
torreygooch
| 2023-05-28T05:53:04Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-28T05:46:48Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: torreygooch/ppo-Huggy3
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sd-concepts-library/entropyvibe1
|
sd-concepts-library
| 2023-05-28T05:52:55Z | 0 | 1 | null |
[
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-05-28T05:52:50Z |
---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### entropyvibe1 on Stable Diffusion
This is the `<entropy-vibe1>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






|
YakovElm/Hyperledger10Classic_512
|
YakovElm
| 2023-05-28T05:45:24Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-28T05:44:46Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger10Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2833
- Train Accuracy: 0.8900
- Validation Loss: 0.3935
- Validation Accuracy: 0.8610
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3645 | 0.8731 | 0.3704 | 0.8600 | 0 |
| 0.3302 | 0.8838 | 0.3660 | 0.8600 | 1 |
| 0.2833 | 0.8900 | 0.3935 | 0.8610 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Oxygene-Monitor/twitter-roberta-base-sentiment-latest
|
Oxygene-Monitor
| 2023-05-28T05:45:10Z | 6 | 0 | null |
[
"pytorch",
"tf",
"roberta",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"region:us"
] | null | 2024-09-30T19:13:57Z |
---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).
## Example Pipeline
```python
from transformers import pipeline

model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```
```
[{'label': 'Negative', 'score': 0.7236}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = f"cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)
text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
```
### References
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
author = "Camacho-collados, Jose and
Rezaee, Kiamehr and
Riahi, Talayeh and
Ushio, Asahi and
Loureiro, Daniel and
Antypas, Dimosthenis and
Boisson, Joanne and
Espinosa Anke, Luis and
Liu, Fangyu and
Mart{\'\i}nez C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2022",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-demos.5",
pages = "38--49"
}
```
```
@inproceedings{loureiro-etal-2022-timelms,
title = "{T}ime{LM}s: Diachronic Language Models from {T}witter",
author = "Loureiro, Daniel and
Barbieri, Francesco and
Neves, Leonardo and
Espinosa Anke, Luis and
Camacho-collados, Jose",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-demo.25",
doi = "10.18653/v1/2022.acl-demo.25",
pages = "251--260"
}
```
|
Johnhex/Clam1.3
|
Johnhex
| 2023-05-28T05:44:23Z | 2 | 2 |
diffusers
|
[
"diffusers",
"stable duffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-28T05:41:37Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable duffusion
---
|
HasinMDG/MLM_distilroberta_SD_company
|
HasinMDG
| 2023-05-28T05:30:18Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-28T05:30:06Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/MLM_distilroberta_SD_company
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/MLM_distilroberta_SD_company")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Imran1/bloom_p560m_3
|
Imran1
| 2023-05-28T05:15:31Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bloom",
"text-generation",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-28T04:40:34Z |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloom_p560m_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_p560m_3
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Qt15Classic_256
|
YakovElm
| 2023-05-28T05:13:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-28T05:12:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt15Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2023
- Train Accuracy: 0.9373
- Validation Loss: 0.2062
- Validation Accuracy: 0.9465
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2403 | 0.9367 | 0.2023 | 0.9505 | 0 |
| 0.2233 | 0.9367 | 0.1936 | 0.9505 | 1 |
| 0.2023 | 0.9373 | 0.2062 | 0.9465 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Rudhi/iznaa
|
Rudhi
| 2023-05-28T05:12:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T05:08:04Z |
---
license: creativeml-openrail-m
---
|
firqaaa/indo-alpaca-lora-7b
|
firqaaa
| 2023-05-28T05:09:14Z | 0 | 2 |
transformers
|
[
"transformers",
"llama",
"alpaca",
"lora",
"text-generation",
"id",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-29T02:56:09Z |
---
language:
- id
pipeline_tag: text-generation
license: cc-by-nc-4.0
library_name: transformers
tags:
- llama
- alpaca
- lora
---
# About :
This 🦙 Llama model was trained on a translated Alpaca dataset in Bahasa Indonesia. It uses Parameter Efficient Fine Tuning and LoRA to enable training on consumer-grade GPU hardware.
# How to Use :
## Load the 🦙 Alpaca-LoRA model
```python
import torch
import bitsandbytes as bnb
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
from peft import PeftModel, PeftConfig, prepare_model_for_int8_training, LoraConfig, get_peft_model
peft_model_id = "firqaaa/indo-Alpaca-LoRA-7b"
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf",
load_in_8bit=True,
device_map="auto")
# Load the LoRA model
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Prompt Template
Prepare the prompt template
```python
instruction = "Tuliskan deret bilangan fibbonaci. Tulis jawaban/respons dalam Bahasa Indonesia."
PROMPT = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:"""
```
## Evaluation
Feel free to change the parameters inside `GenerationConfig` to get better results.
```python
inputs = tokenizer(
PROMPT,
return_tensors="pt"
)
input_ids = inputs["input_ids"].cuda()
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.95,
top_k=40,
num_beams=4,
repetition_penalty=1.15,
)
print("Generating...")
print("Instruction : {}".format(instruction))
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=512,
)
print("Response : ")
for s in generation_output.sequences:
print(tokenizer.decode(s).split("### Response:")[1])
```
## Note :
Due to the high loss and limited compute, we will update this model frequently to ensure the quality of the generated text.
|
cardiffnlp/twitter-roberta-base-emotion-multilabel-latest
|
cardiffnlp
| 2023-05-28T05:08:45Z | 22,686 | 20 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-01T21:14:01Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: twitter-roberta-base-emotion-multilabel-latest
results: []
pipeline_tag: text-classification
language:
- en
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-emotion-multilabel-latest
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) on the
[`SemEval 2018 - Task 1 Affect in Tweets`](https://aclanthology.org/S18-1001/) `(subtask: E-c / multilabel classification)`.
## Performance
Following metrics are achieved on the test split:
- F1 (micro): 0.7169
- F1 (macro): 0.5464
- Jaccard Index (samples): 0.5970
### Usage
#### 1. [tweetnlp](https://pypi.org/project/tweetnlp/)
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.load_model('topic_classification', model_name='cardiffnlp/twitter-roberta-base-emotion-multilabel-latest')
model.predict("I bet everything will work out in the end :)")
>> {'label': ['joy', 'optimism']}
```
#### 2. pipeline
```shell
pip install -U tensorflow==2.10
```
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-emotion-multilabel-latest", return_all_scores=True)
pipe("I bet everything will work out in the end :)")
>> [[{'label': 'anger', 'score': 0.018903767690062523},
{'label': 'anticipation', 'score': 0.28172484040260315},
{'label': 'disgust', 'score': 0.011607927270233631},
{'label': 'fear', 'score': 0.036411102861166},
{'label': 'joy', 'score': 0.8812029361724854},
{'label': 'love', 'score': 0.09591569006443024},
{'label': 'optimism', 'score': 0.9810988306999207},
{'label': 'pessimism', 'score': 0.016823478043079376},
{'label': 'sadness', 'score': 0.01889917254447937},
{'label': 'surprise', 'score': 0.02702752873301506},
{'label': 'trust', 'score': 0.4155798852443695}]]
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
cardiffnlp/twitter-roberta-base-emotion
|
cardiffnlp
| 2023-05-28T05:08:00Z | 282,970 | 42 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Twitter-roBERTa-base for Emotion Recognition
This is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
<b>New!</b> We just released a new emotion recognition model trained with more emotion types and with a newer RoBERTa-based model.
See [twitter-roberta-base-emotion-multilabel-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion-multilabel-latest) and [TweetNLP](https://github.com/cardiffnlp/tweetnlp) for more details.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='emotion'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Celebrating my promotion 😎"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Celebrating my promotion 😎"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) joy 0.9382
2) optimism 0.0362
3) anger 0.0145
4) sadness 0.0112
```
|
qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-2
|
qbao775
| 2023-05-28T05:00:13Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"logical-reasoning",
"logical-equivalence",
"constrastive-learning",
"en",
"arxiv:2305.12599",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-16T09:38:02Z |
---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- logical-reasoning
- logical-equivalence
- constrastive-learning
---
# AMR-LE
This is a branch which includes the model weights for AMR-LE. AMR-LE is a model that has been fine-tuned on AMR-based logic-driven augmented data. The data is formed as `(original sentence, logical equivalence sentence, logical inequivalence sentence)`. We use Abstract Meaning Representation (AMR) to automatically construct logically equivalent and logically inequivalent sentences. We use contrastive learning to train the model to identify whether two sentences are logically equivalent or logically inequivalent. You are welcome to fine-tune the model weights on downstream tasks such as logical reasoning reading comprehension (ReClor and LogiQA) and natural language inference (MNLI, MRPC, QNLI, RTE and QQP). We achieved #2 on the ReClor leaderboard.
Here are the original links for AMR-LE, including the paper, project and leaderboard.
Paper: https://arxiv.org/abs/2305.12599
Project: https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning
Leaderboard: https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347
In this repository, we upload the model weights trained on the dataset with a positive-to-negative sample ratio of 1:2. We use AMR with four logical equivalence laws `(Contraposition law, Commutative law, Implication law, Double negation law)` to construct 4 different logical equivalence/inequivalence sentences.
## How to interact with the model on this web page?
Here are some test examples that you may copy and paste into the user input area on the right.
The expected answer for the following example is that the sentences are logically inequivalent, which is 0. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is false.
```
If Alice is happy, then Bob is smart.
If Alice is not happy, then Bob is smart.
```
The expected answer for the following example is that the sentences are logically equivalent, which is 1. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is true.
```
If Alice is happy, then Bob is smart.
If Bob is not smart, then Alice is not happy.
```
The expected answer for the following example is that the sentences are logically inequivalent, which is 0. Use the double negation law `(A <=> not not A)` to show that the following example is false.
```
Alice is happy.
Alice is not happy.
```
The expected answer for the following example is that the sentences are logically equivalent, which is 1. Use the double negation law `(A <=> not not A)` to show that the following example is true.
```
Alice is happy.
Alice is not sad.
```
The expected answer for the following example is that the sentences are logically inequivalent, which is 0. Use the implication law `(If A then B <=> not A or B)` to show that the following example is false. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is kind or Bob is clever.
```
The expected answer for the following example is that the sentences are logically equivalent, which is 1. Use the implication law `(If A then B <=> not A or B)` to show that the following example is true. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is not kind or Bob is clever.
```
The expected answer for the following example is that the sentences are logically inequivalent, which is 0. Use the commutative law `(A and B <=> B and A)` to show that the following example is false.
```
The bald eagle is clever and the wolf is fierce.
The wolf is not fierce and the bald eagle is not clever.
```
The expected answer for the following example is that the sentences are logically equivalent, which is 1. Use the commutative law `(A and B <=> B and A)` to show that the following example is true.
```
The bald eagle is clever and the wolf is fierce.
The wolf is fierce and the bald eagle is clever.
```
## How to load the model weight?
```
from transformers import AutoModel
model = AutoModel.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-2")
```
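The snippet above loads only the base encoder; to actually score a sentence pair, the sequence-classification head can be loaded instead. A minimal sketch (the 0 = logically inequivalent / 1 = logically equivalent mapping is inferred from the examples above, not read from the model config):
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

sent1 = "If Alice is happy, then Bob is smart."
sent2 = "If Bob is not smart, then Alice is not happy."

# Encode the sentence pair and take the argmax over the two classes
inputs = tokenizer(sent1, sent2, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # assumed mapping: 1 = logically equivalent, 0 = logically inequivalent
```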
## Citation
```
@article{bao2023contrastive,
title={Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text},
author={Bao, Qiming and Peng, Alex Yuxuan and Deng, Zhenyun and Zhong, Wanjun and Tan, Neset and Young, Nathan and Chen, Yang and Zhu, Yonghua and Witbrock, Michael and Liu, Jiamou},
journal={arXiv preprint arXiv:2305.12599},
year={2023}
}
```
|
MetaIX/OpenAssistant-Llama-30b-4bit
|
MetaIX
| 2023-05-28T04:32:45Z | 21 | 69 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-22T23:11:35Z |
<p><strong><font size="5">Information</font></strong></p>
OpenAssistant-Llama-30B-4-bit working with GPTQ versions used in Oobabooga's Text Generation Webui and KoboldAI.
<p>This was made using <a href="https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor">Open Assistant's native fine-tune</a> of Llama 30b on their dataset.</p>
<p><strong>What's included</strong></p>
<P>GPTQ: 2 quantized versions. One was quantized using --true-sequential and --act-order optimizations, and the other was quantized using --true-sequential --groupsize 128 optimizations.</P>
<P>GGML: 3 quantized versions. One quantized using q4_1, another one was quantized using q5_0, and the last one was quantized using q5_1.</P>
<p><strong><font size="5">Update 05.27.2023</font></strong></p>
<p>Updated the ggml quantizations to be compatible with the latest version of llamacpp (again).</p>
<p><strong><font size="5">Update 04.29.2023</font></strong></p>
<p>Updated to the latest fine-tune by Open Assistant <a href="https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor">oasst-sft-7-llama-30b-xor</a>.</p>
<p><strong>GPU/GPTQ Usage</strong></p>
<p>To use with your GPU using GPTQ pick one of the .safetensors along with all of the .jsons and .model files.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md">here</a> and <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/0cc4m/KoboldAI">here</a></p>
<p><strong>CPU/GGML Usage</strong></p>
<p>To use your CPU using GGML(Llamacpp) you only need the single .bin ggml file.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/LostRuins/koboldcpp">here</a></p>
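<p>If you would rather script the GGML file directly instead of using one of the UIs above, a rough sketch with the llama-cpp-python bindings might look like this (the filename and the prompt template are assumptions — use whichever .bin you downloaded, with a llama.cpp/llama-cpp-python build that matches its quantization format, and check Open Assistant's documentation for the exact special-token prompt format):</p>

```python
from llama_cpp import Llama

# Path and prompt format below are assumptions, not values taken from this repo
llm = Llama(model_path="./OpenAssistant-Llama-30b-q5_1.bin", n_ctx=2048)
output = llm("### Human: What is the capital of France?\n### Assistant:", max_tokens=64)
print(output["choices"][0]["text"])
```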
<p><strong><font size="5">Benchmarks</font></strong></p>
<p><strong><font size="4">--true-sequential --act-order</font></strong></p>
<strong>Wikitext2</strong>: 4.964076519012451
<strong>Ptb-New</strong>: 9.641128540039062
<strong>C4-New</strong>: 7.203001022338867
<strong>Note</strong>: This version does not use <i>--groupsize 128</i>, so its evaluation scores are slightly higher. However, this version allows fitting the whole model at full context using only 24GB of VRAM.
<p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p>
<strong>Wikitext2</strong>: 4.641914367675781
<strong>Ptb-New</strong>: 9.117929458618164
<strong>C4-New</strong>: 6.867942810058594
<strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better evaluations. However, it consumes more VRAM.
|
ddoc/sdw
|
ddoc
| 2023-05-28T04:29:28Z | 0 | 1 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-05-28T02:56:00Z |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
  - select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
  - Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API (a minimal request sketch follows this feature list)
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
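The API item above can be scripted directly over HTTP. A minimal sketch (assuming the webui was launched with the `--api` flag and is listening on the default `http://127.0.0.1:7860`):
```python
import base64
import requests

payload = {"prompt": "a photo of an astronaut riding a horse", "steps": 20}
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()

# The endpoint returns base64-encoded PNGs in the "images" field
with open("output.png", "wb") as handle:
    handle.write(base64.b64decode(response.json()["images"][0]))
```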
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
Chudo-chu/SD-t2p-82
|
Chudo-chu
| 2023-05-28T04:02:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T04:02:06Z |
---
license: creativeml-openrail-m
---
|
jonweb37/profitlovy
|
jonweb37
| 2023-05-28T03:40:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-27T16:49:38Z |
---
license: creativeml-openrail-m
---
|
KaiquanMah/q-Taxi-v3-5x5
|
KaiquanMah
| 2023-05-28T03:30:35Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T03:30:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-5x5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="KaiquanMah/q-Taxi-v3-5x5", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KaiquanMah/q-FrozenLake-v1-4x4-noSlippery
|
KaiquanMah
| 2023-05-28T03:24:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T03:24:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="KaiquanMah/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rkamochi/trackerbike
|
rkamochi
| 2023-05-28T03:19:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T03:14:26Z |
---
license: creativeml-openrail-m
---
|
zachary-shah/riffusion-v2
|
zachary-shah
| 2023-05-28T02:38:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-28T02:37:51Z |
Riffusion fine-tuned on the MusicCaps dataset.
|
gaohuakai/output
|
gaohuakai
| 2023-05-28T02:33:12Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-27T15:16:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9137
## Model description
More information needed
## Intended uses & limitations
More information needed
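A rough usage sketch for the checkpoint (assuming the fine-tuned extractive-QA head was pushed to this repository; the question/context pair below is only an illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="gaohuakai/output")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```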
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0057 | 1.0 | 500 | 1.7462 |
| 1.4179 | 2.0 | 1000 | 1.5558 |
| 0.9094 | 3.0 | 1500 | 1.5667 |
| 0.636 | 4.0 | 2000 | 1.7312 |
| 0.4741 | 5.0 | 2500 | 1.8043 |
| 0.3689 | 6.0 | 3000 | 1.9137 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.8.1+cu101
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mrm8488/starcoder-ft-alpaca-es
|
mrm8488
| 2023-05-28T02:23:23Z | 0 | 0 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-05-27T21:31:21Z |
---
license: bigcode-openrail-m
tags:
- generated_from_trainer
model-index:
- name: starcoder-ft-alpaca-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoder-ft-alpaca-es
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9173
## Model description
More information needed
## Intended uses & limitations
More information needed
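A minimal generation sketch (two assumptions: that the full fine-tuned weights live in this repository rather than an adapter, and that the Spanish Alpaca-style prompt below matches the training format):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/starcoder-ft-alpaca-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "### Instrucción:\nEscribe una función en Python que invierta una cadena.\n\n### Respuesta:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```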
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.96 | 0.27 | 200 | 0.9572 |
| 0.9584 | 0.55 | 400 | 0.9368 |
| 0.9635 | 0.82 | 600 | 0.9173 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ericalt/ppo-PyramidsRND
|
ericalt
| 2023-05-28T02:17:17Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-05-28T02:17:12Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: ericalt/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
smarquie/ppo-LunarLander-v2
|
smarquie
| 2023-05-28T01:43:44Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T01:43:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.97 +/- 42.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
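One way the loading step might look in practice (the checkpoint filename is an assumption — check the repository's file list for the actual .zip):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust it to the .zip actually stored in the repo
checkpoint = load_from_hub(repo_id="smarquie/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
# model.predict(observation) then returns the action for a LunarLander-v2 observation
```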
|
ericalt/ppo-SnowballTarget
|
ericalt
| 2023-05-28T01:38:36Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-05-28T01:37:49Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: ericalt/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
burningfalls/my-fine-tuned-bert
|
burningfalls
| 2023-05-28T01:18:42Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"en",
"ko",
"dataset:AI-Hub",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-15T13:05:24Z |
---
language:
- en
- ko
license: apache-2.0
datasets: AI-Hub
metrics:
- accuracy
pipeline_tag: text-classification
---
# 1. Introduction
## 1.1 examples

## 1.2 f1-score

---
# 2. Requirements
```python
# my env
python==3.11.3
tensorflow==2.12.0
transformers==4.29.2
# maybe you need to
python>=3.6
tensorflow>=2.0
transformers>=4.0
```
---
# 3. Load
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import TextClassificationPipeline
BERT_PATH = "burningfalls/my-fine-tuned-bert"
def load_bert():
loaded_tokenizer = AutoTokenizer.from_pretrained(BERT_PATH)
loaded_model = TFAutoModelForSequenceClassification.from_pretrained(BERT_PATH)
text_classifier = TextClassificationPipeline(
tokenizer=loaded_tokenizer,
model=loaded_model,
framework='tf',
top_k=1
    )
    return text_classifier
```
---
# 4. Usage
```python
import re
import sentiments

text_classifier = load_bert()  # the pipeline returned by load_bert() above
def predict_sentiment(text):
result = text_classifier(text)[0]
feel_idx = int(re.sub(r'[^0-9]', '', result[0]['label']))
feel = sentiments.Feel[feel_idx]["label"]
return feel
```
---
# 5. sentiments.py
```python
Feel = [
{"label": "가난한, 불우한", "index": 0},
{"label": "감사하는", "index": 1},
{"label": "걱정스러운", "index": 2},
{"label": "고립된", "index": 3},
{"label": "괴로워하는", "index": 4},
{"label": "구역질 나는", "index": 5},
{"label": "기쁨", "index": 6},
{"label": "낙담한", "index": 7},
{"label": "남의 시선을 의식하는", "index": 8},
{"label": "노여워하는", "index": 9},
{"label": "눈물이 나는", "index": 10},
{"label": "느긋", "index": 11},
{"label": "당혹스러운", "index": 12},
{"label": "당황", "index": 13},
{"label": "두려운", "index": 14},
{"label": "마비된", "index": 15},
{"label": "만족스러운", "index": 16},
{"label": "방어적인", "index": 17},
{"label": "배신당한", "index": 18},
{"label": "버려진", "index": 19},
{"label": "부끄러운", "index": 20},
{"label": "분노", "index": 21},
{"label": "불안", "index": 22},
{"label": "비통한", "index": 23},
{"label": "상처", "index": 24},
{"label": "성가신", "index": 25},
{"label": "스트레스 받는", "index": 26},
{"label": "슬픔", "index": 27},
{"label": "신뢰하는", "index": 28},
{"label": "신이 난", "index": 29},
{"label": "실망한", "index": 30},
{"label": "악의적인", "index": 31},
{"label": "안달하는", "index": 32},
{"label": "안도", "index": 33},
{"label": "억울한", "index": 34},
{"label": "열등감", "index": 35},
{"label": "염세적인", "index": 36},
{"label": "외로운", "index": 37},
{"label": "우울한", "index": 38},
{"label": "자신하는", "index": 39},
{"label": "조심스러운", "index": 40},
{"label": "좌절한", "index": 41},
{"label": "죄책감의", "index": 42},
{"label": "질투하는", "index": 43},
{"label": "짜증내는", "index": 44},
{"label": "초조한", "index": 45},
{"label": "충격 받은", "index": 46},
{"label": "취약한", "index": 47},
{"label": "툴툴대는", "index": 48},
{"label": "편안한", "index": 49},
{"label": "한심한", "index": 50},
{"label": "혐오스러운", "index": 51},
{"label": "혼란스러운", "index": 52},
{"label": "환멸을 느끼는", "index": 53},
{"label": "회의적인", "index": 54},
{"label": "후회되는", "index": 55},
{"label": "흥분", "index": 56},
{"label": "희생된", "index": 57},
]
```
---
# 6. Reference
* BERT: [klue/bert-base](https://huggingface.co/klue/bert-base)
* Dataset: [AI-Hub 감성 대화 말뭉치](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=86)
|
neatbullshit/a2c-PandaReachDense-v2
|
neatbullshit
| 2023-05-28T00:28:45Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-26T22:29:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.60 +/- 0.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YakovElm/Qt10Classic_256
|
YakovElm
| 2023-05-27T23:26:01Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-27T23:25:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt10Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2176
- Train Accuracy: 0.9205
- Validation Loss: 0.2088
- Validation Accuracy: 0.9416
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2773 | 0.9208 | 0.2342 | 0.9416 | 0 |
| 0.2556 | 0.9210 | 0.2074 | 0.9416 | 1 |
| 0.2176 | 0.9205 | 0.2088 | 0.9416 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ayan4m1/killer-mike-style
|
ayan4m1
| 2023-05-27T23:06:43Z | 0 | 1 | null |
[
"art",
"text-to-image",
"en",
"license:mit",
"region:us"
] |
text-to-image
| 2023-05-27T19:33:24Z |
---
license: mit
language:
- en
pipeline_tag: text-to-image
tags:
- art
---
Killer Mike (rapper) on Stable Diffusion
This is the `<killer-mike>` concept taught to Stable Diffusion v1.4 via Textual Inversion.
Here is the new concept you will be able to use as a `person`:




|
neatbullshit/poca-SoccerTwos
|
neatbullshit
| 2023-05-27T22:50:33Z | 35 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-05-27T22:48:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: neatbullshit/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
krasnova/church-fine-tuned-van-gogh-256
|
krasnova
| 2023-05-27T22:47:09Z | 4 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-05-27T22:46:18Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('krasnova/church-fine-tuned-van-gogh-256')
image = pipeline().images[0]
image
```
|
potsawee/longformer-large-4096-answering-race
|
potsawee
| 2023-05-27T22:13:04Z | 182 | 16 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"multiple-choice",
"question-answering",
"en",
"dataset:race",
"arxiv:2301.12307",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2023-03-12T16:15:58Z |
---
license: apache-2.0
datasets:
- race
language:
- en
library_name: transformers
pipeline_tag: question-answering
inference: false
---
# longformer-large-4096 fine-tuned to RACE for (Multiple-Choice) Question Answering
- Input: `context`, `question`, `options`
- Output: logit (or probability over the options)
## Model Details
The longformer-large-4096 model is fine-tuned on the RACE dataset, where the input is a concatenation of ```context + question + option```. We follow the architecture/setup described in https://openreview.net/forum?id=HJgJtT4tvB.
The output is the logit over the options. This is the question answering (QA) component in our [MQAG paper](https://arxiv.org/abs/2301.12307),
or please refer to the GitHub repo of this project: https://github.com/potsawee/mqag0.
## How to Use the Model
Use the code below to get started with the model.
```python
>>> import torch
>>> import numpy as np
>>> from transformers import LongformerTokenizer, LongformerForMultipleChoice
>>> tokenizer = LongformerTokenizer.from_pretrained("potsawee/longformer-large-4096-answering-race")
>>> model = LongformerForMultipleChoice.from_pretrained("potsawee/longformer-large-4096-answering-race")
>>> context = r"""Chelsea's mini-revival continued with a third victory in a row as they consigned struggling Leicester City to a fifth consecutive defeat.
Buoyed by their Champions League win over Borussia Dortmund, Chelsea started brightly and Ben Chilwell volleyed in from a tight angle against his old club.
Chelsea's Joao Felix and Leicester's Kiernan Dewsbury-Hall hit the woodwork in the space of two minutes, then Felix had a goal ruled out by the video assistant referee for offside.
Patson Daka rifled home an excellent equaliser after Ricardo Pereira won the ball off the dawdling Felix outside the box.
But Kai Havertz pounced six minutes into first-half injury time with an excellent dinked finish from Enzo Fernandez's clever aerial ball.
Mykhailo Mudryk thought he had his first goal for the Blues after the break but his effort was disallowed for offside.
Mateo Kovacic sealed the win as he volleyed in from Mudryk's header.
The sliding Foxes, who ended with 10 men following Wout Faes' late dismissal for a second booking, now just sit one point outside the relegation zone.
""".replace('\n', ' ')
>>> question = "Who had a goal ruled out for offside?"
>>> options = ['Ricardo Pereira', 'Ben Chilwell', 'Joao Felix', 'The Foxes']
>>> inputs = prepare_answering_input(
tokenizer=tokenizer, question=question,
options=options, context=context,
)
>>> outputs = model(**inputs)
>>> prob = torch.softmax(outputs.logits, dim=-1)[0].tolist()
>>> selected_answer = options[np.argmax(prob)]
>>> print(prob)
[0.00145158, 0.00460851, 0.99049687, 0.00344302]
>>> print(selected_answer)
Joao Felix
```
where the function that prepares the input to the answering model is:
```python
def prepare_answering_input(
tokenizer, # longformer_tokenizer
question, # str
options, # List[str]
context, # str
max_seq_length=4096,
):
c_plus_q = context + ' ' + tokenizer.bos_token + ' ' + question
c_plus_q_4 = [c_plus_q] * len(options)
tokenized_examples = tokenizer(
c_plus_q_4, options,
max_length=max_seq_length,
padding="longest",
truncation=True,
return_tensors="pt",
)
input_ids = tokenized_examples['input_ids'].unsqueeze(0)
attention_mask = tokenized_examples['attention_mask'].unsqueeze(0)
example_encoded = {
"input_ids": input_ids,
"attention_mask": attention_mask,
}
return example_encoded
```
## Related Models
- Question/Answering Generation ```Context ---> Question + Answer```:
- https://huggingface.co/potsawee/t5-large-generation-race-QuestionAnswer
- https://huggingface.co/potsawee/t5-large-generation-squad-QuestionAnswer
- Distractor (False options) Generation:
- https://huggingface.co/potsawee/t5-large-generation-race-Distractor
## Citation
```bibtex
@article{manakul2023mqag,
title={MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization},
author={Manakul, Potsawee and Liusie, Adian and Gales, Mark JF},
journal={arXiv preprint arXiv:2301.12307},
year={2023}
}
```
|
AustinCarthy/Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75_noaddedB
|
AustinCarthy
| 2023-05-27T21:55:24Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-05-27T14:58:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benign_40K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0196
- Accuracy: 0.9974
- F1: 0.9714
- Precision: 0.9994
- Recall: 0.945
- Roc Auc Score: 0.9725
- Tpr At Fpr 0.01: 0.9556
## Model description
More information needed
## Intended uses & limitations
More information needed
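A rough usage sketch (assuming the fine-tuned sequence-classification head was saved to this repository; the URL below is only an illustration):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AustinCarthy/Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75_noaddedB",
)
print(clf("http://secure-login.example-bank-verify.com/account/update.php"))
```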
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0046 | 1.0 | 66565 | 0.0199 | 0.9965 | 0.9616 | 0.9970 | 0.9286 | 0.9642 | 0.9174 |
| 0.0098 | 2.0 | 133130 | 0.0314 | 0.9952 | 0.9477 | 0.9883 | 0.9104 | 0.9549 | 0.8002 |
| 0.0015 | 3.0 | 199695 | 0.0132 | 0.9976 | 0.9737 | 0.9981 | 0.9504 | 0.9752 | 0.9528 |
| 0.0017 | 4.0 | 266260 | 0.0164 | 0.9975 | 0.9731 | 0.9987 | 0.9488 | 0.9744 | 0.9536 |
| 0.0013 | 5.0 | 332825 | 0.0196 | 0.9974 | 0.9714 | 0.9994 | 0.945 | 0.9725 | 0.9556 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
melisssy/distilbert-base-uncased-finetuned-squad-d5716d28
|
melisssy
| 2023-05-27T21:43:35Z | 0 | 0 | null |
[
"pytorch",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2023-05-27T21:43:25Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
jayanthspratap/vit-base-patch16-224-drfx-CT-classifier
|
jayanthspratap
| 2023-05-27T21:43:01Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-27T21:38:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-drfx-CT-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7058823529411765
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-drfx-CT-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6281
- Accuracy: 0.7059
## Model description
More information needed
## Intended uses & limitations
More information needed
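A rough inference sketch (the image path is a placeholder, and it is assumed that both the fine-tuned weights and the image processor were pushed to this repository):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jayanthspratap/vit-base-patch16-224-drfx-CT-classifier",
)
predictions = classifier("path/to/ct_image.png")  # placeholder path
print(predictions)  # list of {"label": ..., "score": ...} dicts
```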
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.7020 | 0.5294 |
| No log | 2.0 | 8 | 0.6686 | 0.6471 |
| 0.7085 | 3.0 | 12 | 0.6509 | 0.5882 |
| 0.7085 | 4.0 | 16 | 0.6336 | 0.6471 |
| 0.6847 | 5.0 | 20 | 0.6281 | 0.7059 |
| 0.6847 | 6.0 | 24 | 0.6256 | 0.7059 |
| 0.6847 | 7.0 | 28 | 0.6229 | 0.7059 |
| 0.6814 | 8.0 | 32 | 0.6218 | 0.7059 |
| 0.6814 | 9.0 | 36 | 0.6214 | 0.7059 |
| 0.6717 | 10.0 | 40 | 0.6213 | 0.7059 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lora-x/backpack-gpt2
|
lora-x
| 2023-05-27T21:41:49Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text-generation-inference",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-27T20:19:13Z |
---
pipeline_tag: text-generation
tags:
- text-generation-inference
library_name: transformers
---
|
jayanthspratap/vit-base-patch16-224-drfx-surgery-classifier
|
jayanthspratap
| 2023-05-27T21:35:13Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-27T21:29:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-drfx-surgery-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-drfx-surgery-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3696
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.5814 | 0.875 |
| No log | 2.0 | 8 | 0.4649 | 0.875 |
| 0.6165 | 3.0 | 12 | 0.4166 | 0.875 |
| 0.6165 | 4.0 | 16 | 0.3897 | 0.875 |
| 0.4482 | 5.0 | 20 | 0.3811 | 0.875 |
| 0.4482 | 6.0 | 24 | 0.3754 | 0.875 |
| 0.4482 | 7.0 | 28 | 0.3724 | 0.875 |
| 0.4023 | 8.0 | 32 | 0.3707 | 0.875 |
| 0.4023 | 9.0 | 36 | 0.3698 | 0.875 |
| 0.4415 | 10.0 | 40 | 0.3696 | 0.875 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jayanthspratap/resnet-50-drfx-surgery-classifier
|
jayanthspratap
| 2023-05-27T21:10:48Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-27T21:09:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-drfx-surgery-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-drfx-surgery-classifier
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6399
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.6591 | 0.8125 |
| No log | 2.0 | 8 | 0.6399 | 0.875 |
| 0.6638 | 3.0 | 12 | 0.6671 | 0.875 |
| 0.6638 | 4.0 | 16 | 0.6645 | 0.8125 |
| 0.6562 | 5.0 | 20 | 0.6495 | 0.875 |
| 0.6562 | 6.0 | 24 | 0.6673 | 0.875 |
| 0.6562 | 7.0 | 28 | 0.6539 | 0.875 |
| 0.6527 | 8.0 | 32 | 0.6519 | 0.875 |
| 0.6527 | 9.0 | 36 | 0.6603 | 0.875 |
| 0.6596 | 10.0 | 40 | 0.6525 | 0.875 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jayanthspratap/convnext-tiny-224-drfx-surgery-classifier
|
jayanthspratap
| 2023-05-27T21:07:01Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-27T21:01:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-drfx-surgery-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-drfx-surgery-classifier
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6160
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.7140 | 0.375 |
| No log | 2.0 | 8 | 0.6876 | 0.5 |
| 0.7104 | 3.0 | 12 | 0.6666 | 0.625 |
| 0.7104 | 4.0 | 16 | 0.6495 | 0.6875 |
| 0.6567 | 5.0 | 20 | 0.6360 | 0.75 |
| 0.6567 | 6.0 | 24 | 0.6247 | 0.8125 |
| 0.6567 | 7.0 | 28 | 0.6160 | 0.875 |
| 0.6277 | 8.0 | 32 | 0.6098 | 0.875 |
| 0.6277 | 9.0 | 36 | 0.6058 | 0.875 |
| 0.6122 | 10.0 | 40 | 0.6043 | 0.875 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jayanthspratap/resnet-50-drfx-CT-classifier
|
jayanthspratap
| 2023-05-27T20:54:09Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-27T20:45:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-drfx-CT-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7647058823529411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-drfx-CT-classifier
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6813
- Accuracy: 0.7647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.6770 | 0.6471 |
| No log | 2.0 | 8 | 0.6813 | 0.7647 |
| 0.6847 | 3.0 | 12 | 0.6777 | 0.7059 |
| 0.6847 | 4.0 | 16 | 0.6819 | 0.7059 |
| 0.6886 | 5.0 | 20 | 0.6842 | 0.6471 |
| 0.6886 | 6.0 | 24 | 0.6806 | 0.7059 |
| 0.6886 | 7.0 | 28 | 0.6765 | 0.7059 |
| 0.6865 | 8.0 | 32 | 0.6807 | 0.7647 |
| 0.6865 | 9.0 | 36 | 0.6822 | 0.6471 |
| 0.6848 | 10.0 | 40 | 0.6832 | 0.5882 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jayanthspratap/convnext-tiny-224-drfx-CT-classifier
|
jayanthspratap
| 2023-05-27T20:29:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-27T20:19:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-drfx-CT-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7647058823529411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-drfx-CT-classifier
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6600
- Accuracy: 0.7647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.6600 | 0.7647 |
| No log | 2.0 | 8 | 0.7135 | 0.5882 |
| 0.4055 | 3.0 | 12 | 0.6851 | 0.6471 |
| 0.4055 | 4.0 | 16 | 0.6512 | 0.6471 |
| 0.3895 | 5.0 | 20 | 0.7131 | 0.6471 |
| 0.3895 | 6.0 | 24 | 0.7158 | 0.6471 |
| 0.3895 | 7.0 | 28 | 0.6533 | 0.7647 |
| 0.363 | 8.0 | 32 | 0.6211 | 0.7059 |
| 0.363 | 9.0 | 36 | 0.6222 | 0.7647 |
| 0.3207 | 10.0 | 40 | 0.6255 | 0.7647 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
4bit/Replit-v1-CodeInstruct-3B
|
4bit
| 2023-05-27T20:16:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"code",
"instruct",
"self instruct",
"custom_code",
"dataset:bigcode/the-stack-dedup",
"dataset:sahil2801/CodeAlpaca-20k",
"dataset:teknium/GPTeacher-CodeInstruct",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-27T20:11:12Z |
---
license: cc-by-sa-4.0
datasets:
- bigcode/the-stack-dedup
- sahil2801/CodeAlpaca-20k
- teknium/GPTeacher-CodeInstruct
model-base:
- replit/replit-code-v1-3b
tags:
- code
- instruct
- self instruct
language:
- code
programming_language:
- Markdown
- Java
- JavaScript
- Python
- TypeScript
- PHP
- SQL
- JSX
- reStructuredText
- Rust
- C
- CSS
- Go
- C++
- HTML
- Vue
- Ruby
- Jupyter Notebook
- R
- Shell
---
Base Model: replit/replit-code-v1-3b
This model is fine-tuned on both Sahil2801's CodeAlpaca and Teknium's GPTeacher Code-Instruct to give Replit's code model instruct capabilities.
Try this model in its Hugging Face demo Space: https://huggingface.co/spaces/teknium/Replit-v1-CodeInstruct-3B
Dataset links:
CodeAlpaca: https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k
GPTeacher subset - Code Instruct: https://github.com/teknium1/GPTeacher
This model was trained on 2x A100 80GB for 1 hour on ~25,000 code instruction/response pairs in Alpaca format.
Refer to the base model's Hugging Face model card for some basic requirements to run it: https://huggingface.co/replit/replit-code-v1-3b
This fine-tune can be prompted like any Alpaca fine-tune:
```
### Instruction:
<prompt>
### Input:
<additional context>
### Response:
```
or
```
### Instruction:
<prompt>
### Response:
```
This model seems to have issues with `device_map="auto"` in the model arguments (and it requires `trust_remote_code=True`), so you may want to load it as shown here:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code=True is required because the Replit model ships custom modelling code
self.tokenizer = AutoTokenizer.from_pretrained("./Replit-CodeInstruct/", trust_remote_code=True)
self.model = AutoModelForCausalLM.from_pretrained(
    "./Replit-CodeInstruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)
# Move the model to the GPU explicitly instead of relying on device_map="auto"
self.model.to('cuda')
```
For me, this model produced coherent outputs with the following sampler settings, but feel free to experiment:
```
max_new_tokens=128, do_sample=True, use_cache=True, temperature=0.2, top_p=0.9, eos_token_id= self.tokenizer.eos_token_id
```
The tokenizer `decode` call also needs these settings:
```
skip_special_tokens=True, clean_up_tokenization_spaces=False
```
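Putting these pieces together, an end-to-end generation call might look like this sketch (it assumes a local copy of the checkpoint at `./Replit-CodeInstruct`, a CUDA device, and uses a made-up instruction as the prompt):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("./Replit-CodeInstruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "./Replit-CodeInstruct", torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

# Alpaca-style prompt without the optional Input section
prompt = "### Instruction:\nWrite a Python function that reverses a string.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    use_cache=True,
    temperature=0.2,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
```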
The following parameters were used with the Hugging Face trainer to train the model:
```
--model_name_or_path replit/replit-code-v1-3b --data_path /root/stanford_alpaca/train.json --bf16 True --output_dir /root/stanford_alpaca/model_ckpts --num_train_epochs 3 --per_device_train_batch_size 4 --per_device_eval_batch_size 1 --gradient_accumulation_steps 8 --save_strategy steps --save_steps 200 --save_total_limit 3 --learning_rate 1e-5 --weight_decay 0. --warmup_ratio 0.03 --tf32 True --run_name Replit1
```
|
darkblack/SXPOST | darkblack | 2023-05-27T20:10:44Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-05-27T19:57:55Z |
---
license: creativeml-openrail-m
---
|
Zhakky96/Sukmalora | Zhakky96 | 2023-05-27T19:47:05Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-05-27T19:42:55Z |
---
license: creativeml-openrail-m
---
|
YakovElm/Hyperledger5Classic_512 | YakovElm | 2023-05-27T19:38:29Z | 61 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-27T19:37:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger5Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3034
- Train Accuracy: 0.8744
- Validation Loss: 0.4265
- Validation Accuracy: 0.8185
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
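A minimal TensorFlow inference sketch (assuming the checkpoint is available under this repo id and that the input is raw issue text; the label semantics are not documented in this card):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "YakovElm/Hyperledger5Classic_512"  # assumed Hub location of this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example issue text to classify", return_tensors="tf", truncation=True, max_length=512)
logits = model(**inputs).logits
predicted_class = int(tf.argmax(logits, axis=-1)[0])
print(predicted_class)
```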
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
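Recreating that serialized optimizer config in Keras would look roughly like this sketch:

```python
import tensorflow as tf

# Rough Keras equivalent of the optimizer config listed above
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
)
```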
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4068 | 0.8537 | 0.4270 | 0.8361 | 0 |
| 0.3760 | 0.8537 | 0.4053 | 0.8361 | 1 |
| 0.3034 | 0.8744 | 0.4265 | 0.8185 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|