modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
songyulong/VAE
|
songyulong
| 2023-06-20T03:04:21Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-20T02:55:45Z |
---
license: bigscience-openrail-m
---
|
mirav/SeleneArtistic
|
mirav
| 2023-06-20T03:01:19Z | 0 | 1 | null |
[
"stable diffusion",
"text-to-image",
"en",
"dataset:mirav/artistic-imagery",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-12T01:51:00Z |
---
license: creativeml-openrail-m
datasets:
- mirav/artistic-imagery
language:
- en
pipeline_tag: text-to-image
tags:
- stable diffusion
---
## Description
A Stable Diffusion 1.5 model fine-tuned on a subset of [mirav/artistic-imagery](https://huggingface.co/datasets/mirav/artistic-imagery). Still a work in progress.
## Goals
To provide a model that can produce a wide variety of styles and is highly responsive to certain traditional art terms.
Current trained terms (as of version 1.0):
* watercolor \(medium\)
* watercolor pencil \(medium\)
* sketch
* traditional media
* ink wash painting
* impressionism
* acrylic painting
* oil painting
* chiaroscuro<sup>1</sup>
## Notes
<sup>1</sup> Due to documented issues with the noise scheduler, this does not presently have quite the intended effect.
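A minimal usage sketch (not from the original card), assuming the checkpoint is published in diffusers format; if the repository only contains a single `.ckpt`/`.safetensors` file, `StableDiffusionPipeline.from_single_file` would be the right entry point instead.
```python
# Hedged example: the repository layout and the prompt style are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("mirav/SeleneArtistic", torch_dtype=torch.float16).to("cuda")
prompt = r"a quiet harbor at dawn, watercolor \(medium\), traditional media"  # uses trained terms listed above
image = pipe(prompt).images[0]
image.save("harbor.png")
```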
|
GralchemOz/guanaco-33b-chinese
|
GralchemOz
| 2023-06-20T02:40:43Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T07:25:18Z |
---
license: apache-2.0
---
This model is a merged version of [guanaco-33b](https://huggingface.co/timdettmers/guanaco-33b) and [chinese-alpaca-lora-33b](https://huggingface.co/ziqingyang/chinese-alpaca-lora-33b), which enhances the Chinese language capability while retaining the abilities of the original models.
Please follow the corresponding model licenses when using this model.
This model was obtained by merging [guanaco-33b](https://huggingface.co/timdettmers/guanaco-33b) and [chinese-alpaca-lora-33b](https://huggingface.co/ziqingyang/chinese-alpaca-lora-33b); it strengthens Chinese ability while preserving the capabilities of the original models.
Be sure to comply with the corresponding model licenses when using it.
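A minimal loading sketch (not from the original card). A 33B model needs very substantial GPU memory, `device_map="auto"` requires the `accelerate` package, and the Guanaco-style prompt template shown here is an assumption.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GralchemOz/guanaco-33b-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumed prompt template; adjust to whatever format the merged model expects.
prompt = "### Human: Please introduce yourself in Chinese.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```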
|
joeyc/ppo-LunarLander-v2
|
joeyc
| 2023-06-20T02:06:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T01:59:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.53 +/- 18.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is an assumption; check the repository files for the exact name.
checkpoint = load_from_hub("joeyc/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
wuster/bloomz-1b-lora-tagger
|
wuster
| 2023-06-20T02:02:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T02:02:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
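A minimal sketch (not part of the generated card) of loading this adapter with the quantization settings listed above; the BLOOMZ base checkpoint named below is an assumption inferred from the repository name.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "bigscience/bloomz-1b1"  # assumption: the ~1B BLOOMZ base this LoRA was trained on
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit: True above
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "wuster/bloomz-1b-lora-tagger")
```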
### Framework versions
- PEFT 0.4.0.dev0
|
pcalhoun/gpt-j-6b-8bit-pun-generator
|
pcalhoun
| 2023-06-20T01:56:44Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-03-08T22:56:51Z |
---
license: apache-2.0
---
*["This is the moment I've been training for," said the pun-generating AI](https://paulcalhoun.substack.com/p/this-is-the-moment-ive-been-training)*
**Note:** At the time this was created, HF didn't support running models fine-tuned in 8-bit, so in the included Python example the transformers module gets patched before the model is loaded directly via torch. Also, make sure you use the package versions listed in requirements.txt; at minimum, the bitsandbytes and transformers versions need to match what's there.
# In 2022 Robert Gonsalves [demonstrated](https://towardsdatascience.com/i-once-trained-an-ai-to-rhyme-and-it-took-gpt-j-a-long-time-de1f98925e17) that GPT-J-6B could be fine-tuned for limerick generation. This is an interesting data point, historically speaking, for a few reasons:
* GPT-J-6B was over a year old when this happened
* It’s ~50x smaller than GPT3
* Generating coherent and amusing jokes [is considered computationally difficult](https://hdsr.mitpress.mit.edu/pub/wi9yky5c/release/3)
* Note: Google’s PaLM LLM [already managed this task](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), albeit at 100x scale
* Robert Gonsalves did this as a fun personal project, using readily available cloud tools
# I’m currently trying to fine-tune the same model to make puns. Some unique (I think) output examples so far:
* **Two guys argued about a painting. There was a rupture in the peace.**
* Peace => Piece (painting)
* **When the townspeople found out the cow was giving birth, it was quite a cow to have to deal with.**
* I like this one because it’s still a pun, despite not being remotely funny.
* **A musician went to the hospital because he swallowed a trombone. The doctor told him to have a tube inserted and he would be playing on his own soon.**
* This is a mediocre pun, but the setup requires a large amount of real-world knowledge.
* **Two electricians had such different tastes, they went to a waffle outlet for a discussion.**
* This one appears to be a double-pun (electricians => outlet, and waffle-food => waffle-to change opinions)
* **“I love kiwis,” said Tom kiwwisely.**
* They’re not all zingers.
* **To be able to go back to boarding school and pass all her subjects meant that she had learnt her lesson.**
* So much worldbuilding for such an anticlimactic payoff.
* **The story of a boy who was born with one eye in the wrong place was told from an unexpected angle.**
* This one is probably the most impressive to date, after ~12000 fine-tuning steps (and poring through maybe 800 non-pun or unfunny pun inferences).
* **Old pianists never die they just get tuned away.**
* This format (“Old [specialist]s never die, they just [death euphemism]”) is featured many times in the training data. However, the above pun is not on Google anywhere, so I assume it’s new.
* **I like to have a fire lit in my chimney, said Tom light-heartedly.**
* Heart=>Hearth
* **Old gardeners never die they just turn green**
* **He didn't wear his house shoes to work because he's such a homeboy.**
* **Old mathematicians never die, they just have to multiply.**
* **A young man sitting at a table with a pot of stew was very busy keeping a lid on his appetite.**
* **Drumlines are always being beat up.**
* **"There's no shortage of water," said Tom rationally.**
* Water rations.
* **My new job as a gem cutter is fascinating because I am so deeply engaging.**
* Gems => engagement rings.
|
Hokkaiswimming/autotrain-sessya06201-68135137237
|
Hokkaiswimming
| 2023-06-20T01:56:15Z | 183 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:Hokkaiswimming/autotrain-data-sessya06201",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-20T01:55:48Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- Hokkaiswimming/autotrain-data-sessya06201
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.0573190927493014
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 68135137237
- CO2 Emissions (in grams): 0.0573
## Validation Metrics
- Loss: 0.007
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
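A quick way to try the classifier (not part of the generated card) is the transformers pipeline; the sample image below is the one used in the card's widget.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Hokkaiswimming/autotrain-sessya06201-68135137237")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```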
|
Globee/Sarahviroid
|
Globee
| 2023-06-20T01:46:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T12:13:15Z |
---
license: creativeml-openrail-m
---
|
Brandulio/Reinforce-Pixelcopter
|
Brandulio
| 2023-06-20T01:29:14Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T01:29:10Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 50.10 +/- 43.88
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
wiliest-closure0u/ppo-LunarLander-v2
|
wiliest-closure0u
| 2023-06-20T01:17:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T01:17:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.82 +/- 17.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is an assumption; check the repository files for the exact name.
checkpoint = load_from_hub("wiliest-closure0u/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
VIMA/VIMA
|
VIMA
| 2023-06-20T01:10:42Z | 0 | 14 | null |
[
"arxiv:1810.03993",
"arxiv:1912.10389",
"arxiv:2210.03094",
"license:mit",
"region:us"
] | null | 2022-10-05T22:40:02Z |
---
license: mit
---
# Model Card
Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/abs/1912.10389), we’re providing some accompanying information about the VIMA model.
## Model Details
VIMA (**Vi**suo**M**otor **A**ttention) is a novel Transformer agent that ingests multimodal prompts and outputs robot arm control autoregressively. VIMA is developed primarily by researchers at Stanford/NVIDIA.
### Model Date
October 2022
### Model Type
The VIMA model consists of a pretrained T5 model as the prompt encoder, several tokenizers that process multimodal inputs, and a causal decoder that autoregressively predicts actions given the prompt and interaction history.
### Model Versions
We released 7 checkpoints covering a spectrum of model capacity from 2M to 200M parameters.
## Model Use
### Intended Use
The model is intended to be used alongside [VIMA-Bench](https://github.com/vimalabs/VimaBench) to study general robot manipulation with multimodal prompts.
### Primary intended uses
The primary intended users of these models are AI researchers in robotics, multimodal learning, embodied agents, foundation models, etc.
## Data
The models were trained with [data](https://doi.org/10.5281/zenodo.7127587) generated by oracles implemented in [VIMA-Bench](https://github.com/vimalabs/VimaBench). It includes 650K successful trajectories for behavior cloning. We use 600K trajectories for training; the remaining 50K trajectories are held out for validation purposes.
## Performance and Limitations
### Metrics and Performance
We quantify the performance of trained models using the task success percentage aggregated over multiple tasks. We evaluate models' performance on the task suite from [VIMA-Bench](https://github.com/vimalabs/VimaBench) and follow the proposed evaluation protocol. See our paper for more details.
### Limitations
Our provided model checkpoints are pre-trained on VIMA-Bench, which may not directly generalize to other simulators or the real world. Limitations are further discussed in the paper.
## Paper and Citation
Our paper is posted on [arXiv](https://arxiv.org/abs/2210.03094). If you find our work useful, please consider citing us!
```bibtex
@inproceedings{jiang2023vima,
title = {VIMA: General Robot Manipulation with Multimodal Prompts},
author = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan},
booktitle = {Fortieth International Conference on Machine Learning},
year = {2023}
}
```
|
Densu341/Bugiene_model
|
Densu341
| 2023-06-20T00:53:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-20T00:48:09Z |
# Bugiene Machine Learning Model
This repository contains the code for a machine learning model that classifies fruits as rotten or fresh using computer vision and a Convolutional Neural Network (CNN). The model was built with the TensorFlow Keras library and trained on a dataset of fruit images (apples, avocados, bananas, grapes, guavas, and oranges) containing both rotten and fresh examples, from which the CNN learned to distinguish the two classes based on the features it extracted from the images. The model's output is a prediction of "Fresh" or "Rotten" together with the accuracy.
## Table of Contents
- [Dataset](#dataset)
- [Model Architecture](#model-architecture)
- [Requirements](#requirements)
- [Usage](#usage)
- [Results](#results)
- [Contributor Acknowledgment](#contributor-acknowledgment)
## Dataset
The model is trained on a custom dataset consisting of labeled images. The dataset can be obtained from https://www.kaggle.com/datasets/sriramr/fruits-fresh-and-rotten-for-classification and https://www.kaggle.com/datasets/moltean/fruits.
## Model Architecture
The CNN model architecture used for this project is as follows (the imports and the flatten/output wiring are assumptions based on the usual Keras transfer-learning pattern; see the linked notebook for the exact code):

    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import VGG16

    pre_trained_model = VGG16(input_shape=(150, 150, 3), include_top=False)
    for layer in pre_trained_model.layers:
        layer.trainable = False

    # Assumed glue code: flatten the VGG16 feature maps before the dense head
    x = layers.Flatten()(pre_trained_model.output)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(1, activation='sigmoid')(x)
    model = Model(pre_trained_model.input, x)
## Requirements
To run the code in this repository, you will need the following dependencies:
- Python [3.10.10]
- TensorFlow [2.12.0]
## Usage
1. Clone this repository to your local machine.
2. Install the required dependencies by running `pip install -r requirements.txt`.
3. Download the notebook at https://github.com/Bugiene/Bugiene-app/blob/master/machine-learning/bugiene_model.ipynb to your local machine.
4. Run the main script using `python main.py`.
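Beyond running the full script, the following is a minimal single-image inference sketch (not part of the original repo); the saved-model filename, the example image path, and the "score ≥ 0.5 means rotten" convention are assumptions.
```python
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

model = load_model("bugiene_model.h5")              # assumed filename for the trained model
img = image.load_img("banana.jpg", target_size=(150, 150))
x = image.img_to_array(img) / 255.0                 # match the 150x150 input and [0, 1] rescaling
score = float(model.predict(np.expand_dims(x, axis=0))[0][0])
print("Rotten" if score >= 0.5 else "Fresh", f"(score={score:.3f})")
```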
## Results
Evaluation on the test set (4004 images belonging to 2 classes):
- accuracy: 0.9450549483299255
- loss: 0.14052222669124603

For the result test you can check https://github.com/Bugiene/Bugiene-app/tree/master/machine-learning/result-test
## Contributor Acknowledgment
We would like to acknowledge the following contributors for their valuable contributions to this project:
- Deni Irawan (GitHub: Densu341)
- Sandro Sinaga (GitHub: SandroSinaga24)
- Laila Nur Anggamurti (GitHub: jejukyul)
## Contact
For any questions or inquiries, please contact the contributors mentioned above. Thank you.
|
echrisantus/Reinforce-v1
|
echrisantus
| 2023-06-20T00:46:53Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T00:46:37Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
memotirre90/Equipo16_FineTunning_Amazon_Comments
|
memotirre90
| 2023-06-20T00:40:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T06:26:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Equipo16_FineTunning_Amazon_Comments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Equipo16_FineTunning_Amazon_Comments
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2751
- Accuracy: 0.9093
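A minimal sketch (not part of the generated card) for trying the classifier with the transformers pipeline; the label names returned depend on how the training script mapped them.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="memotirre90/Equipo16_FineTunning_Amazon_Comments")
print(classifier("This product exceeded my expectations."))
```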
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
twidfeel/distilbert-base-uncased-distilled-clinc
|
twidfeel
| 2023-06-20T00:25:44Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-20T00:15:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9470967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2389
- Accuracy: 0.9471
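A minimal sketch (not part of the generated card) for intent classification on a CLINC-style utterance with the transformers pipeline.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="twidfeel/distilbert-base-uncased-distilled-clinc")
print(classifier("please set an alarm for seven in the morning"))
```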
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9829 | 1.0 | 318 | 1.3786 | 0.7284 |
| 1.0665 | 2.0 | 636 | 0.6878 | 0.8642 |
| 0.5642 | 3.0 | 954 | 0.4058 | 0.9126 |
| 0.3514 | 4.0 | 1272 | 0.3042 | 0.9339 |
| 0.2656 | 5.0 | 1590 | 0.2701 | 0.94 |
| 0.2305 | 6.0 | 1908 | 0.2532 | 0.9442 |
| 0.2131 | 7.0 | 2226 | 0.2462 | 0.9458 |
| 0.2031 | 8.0 | 2544 | 0.2409 | 0.9471 |
| 0.1975 | 9.0 | 2862 | 0.2401 | 0.9461 |
| 0.1953 | 10.0 | 3180 | 0.2389 | 0.9471 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
xycon/Ralora
|
xycon
| 2023-06-20T00:09:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T00:09:16Z |
---
license: creativeml-openrail-m
---
|
dtntxt/ppo-LunarLander-v2
|
dtntxt
| 2023-06-20T00:06:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-12T00:31:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.15 +/- 19.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is an assumption; check the repository files for the exact name.
checkpoint = load_from_hub("dtntxt/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Jhonny1998/Sentimientos
|
Jhonny1998
| 2023-06-19T23:56:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-19T23:53:06Z |
---
license: apache-2.0
---
    import json
    import requests

    API_TOKEN = ""

    def query(payload='', parameters=None, options={'use_cache': False}):
        API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
        headers = {"Authorization": f"Bearer {API_TOKEN}"}
        body = {"inputs": payload, 'parameters': parameters, 'options': options}
        response = requests.request("POST", API_URL, headers=headers, data=json.dumps(body))
        try:
            response.raise_for_status()
        except requests.exceptions.HTTPError:
            return "Error:" + " ".join(response.json()['error'])
        else:
            return response.json()[0]['generated_text']

    parameters = {
        'max_new_tokens': 25,  # number of generated tokens
        'temperature': 0.5,    # controlling the randomness of generations
        'end_sequence': "###"  # stopping sequence for generation
    }

    prompt = "...."  # few-shot prompt
    data = query(prompt, parameters)  # options keeps its default value
|
husienburgir/Rintest
|
husienburgir
| 2023-06-19T23:44:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T23:39:34Z |
---
license: creativeml-openrail-m
---
|
AustinCarthy/MixGPT2V2_suffix_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-19T23:34:55Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-19T21:18:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2V2_suffix_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2V2_suffix_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2V2_using_benign_95K_top_p_0.75suffix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0286
- Accuracy: 0.9964
- F1: 0.9612
- Precision: 0.9728
- Recall: 0.95
- Roc Auc Score: 0.9743
- Tpr At Fpr 0.01: 0.7924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0218 | 1.0 | 22121 | 0.0193 | 0.9952 | 0.9485 | 0.9717 | 0.9264 | 0.9625 | 0.7698 |
| 0.013 | 2.0 | 44242 | 0.0213 | 0.9957 | 0.9546 | 0.9675 | 0.942 | 0.9702 | 0.799 |
| 0.0041 | 3.0 | 66363 | 0.0262 | 0.9951 | 0.9494 | 0.9395 | 0.9596 | 0.9783 | 0.792 |
| 0.0034 | 4.0 | 88484 | 0.0223 | 0.9964 | 0.9618 | 0.9657 | 0.958 | 0.9781 | 0.8558 |
| 0.001 | 5.0 | 110605 | 0.0286 | 0.9964 | 0.9612 | 0.9728 | 0.95 | 0.9743 | 0.7924 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MindNetML/ppo-LunarLander-v2
|
MindNetML
| 2023-06-19T23:07:39Z | 1 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T23:07:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.22 +/- 28.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is an assumption; check the repository files for the exact name.
checkpoint = load_from_hub("MindNetML/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
aphi/dqn-SpaceInvadersNoFrameskip-v4_1
|
aphi
| 2023-06-19T23:07:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T23:06:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 330.50 +/- 71.74
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aphi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aphi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga aphi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
C-Lo/finetuning-sentiment-gendered-dataset
|
C-Lo
| 2023-06-19T22:58:29Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T22:55:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-gendered-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-gendered-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sid/ppo-Huggy
|
sid
| 2023-06-19T22:53:24Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-19T22:52:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sid/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MarketingHHM/autotrain-hhmqatest23-68104137216
|
MarketingHHM
| 2023-06-19T22:52:12Z | 98 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"led",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:MarketingHHM/autotrain-data-hhmqatest23",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-19T22:31:26Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- MarketingHHM/autotrain-data-hhmqatest23
co2_eq_emissions:
emissions: 14.037553452269616
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 68104137216
- CO2 Emissions (in grams): 14.0376
## Validation Metrics
- Loss: 0.920
- Rouge1: 34.783
- Rouge2: 23.625
- RougeL: 29.390
- RougeLsum: 32.868
- Gen Len: 109.840
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MarketingHHM/autotrain-hhmqatest23-68104137216
```
|
gokuls/hbertv1-Massive-intent_w_in
|
gokuls
| 2023-06-19T22:35:13Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T22:26:09Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-Massive-intent_w_in
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8745696015740285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-Massive-intent_w_in
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7790
- Accuracy: 0.8746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2877 | 1.0 | 180 | 0.9877 | 0.7329 |
| 0.8514 | 2.0 | 360 | 0.7403 | 0.7993 |
| 0.5896 | 3.0 | 540 | 0.6955 | 0.8239 |
| 0.4058 | 4.0 | 720 | 0.6778 | 0.8313 |
| 0.3003 | 5.0 | 900 | 0.6345 | 0.8505 |
| 0.2236 | 6.0 | 1080 | 0.6567 | 0.8583 |
| 0.1615 | 7.0 | 1260 | 0.7163 | 0.8460 |
| 0.1159 | 8.0 | 1440 | 0.7450 | 0.8519 |
| 0.0976 | 9.0 | 1620 | 0.7533 | 0.8490 |
| 0.061 | 10.0 | 1800 | 0.7502 | 0.8642 |
| 0.0438 | 11.0 | 1980 | 0.7729 | 0.8618 |
| 0.0309 | 12.0 | 2160 | 0.7790 | 0.8746 |
| 0.0191 | 13.0 | 2340 | 0.8302 | 0.8682 |
| 0.0101 | 14.0 | 2520 | 0.8224 | 0.8721 |
| 0.0057 | 15.0 | 2700 | 0.8229 | 0.8716 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Wazzzabeee/PoliteT5Base
|
Wazzzabeee
| 2023-06-19T22:29:16Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T19:30:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: PoliteT5Base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PoliteT5Base
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8536
- Toxicity Ratio: 0.3421
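A minimal sketch (not part of the generated card) for trying the model; whether the checkpoint expects a task prefix is not documented here, so plain input text is an assumption.
```python
from transformers import pipeline

rewriter = pipeline("text2text-generation", model="Wazzzabeee/PoliteT5Base")
print(rewriter("Move out of my way right now!"))
```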
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 75
### Training results
| Training Loss | Epoch | Step | Validation Loss | Toxicity Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|
| No log | 1.0 | 22 | 1.3256 | 0.3070 |
| No log | 2.0 | 44 | 0.8436 | 0.2982 |
| 1.6337 | 3.0 | 66 | 0.7944 | 0.3333 |
| 1.6337 | 4.0 | 88 | 0.8921 | 0.3158 |
| 0.547 | 5.0 | 110 | 0.9630 | 0.2632 |
| 0.547 | 6.0 | 132 | 0.9711 | 0.3158 |
| 0.3279 | 7.0 | 154 | 0.9966 | 0.3070 |
| 0.3279 | 8.0 | 176 | 1.0053 | 0.3246 |
| 0.3279 | 9.0 | 198 | 1.0326 | 0.3333 |
| 0.2282 | 10.0 | 220 | 0.9798 | 0.3158 |
| 0.2282 | 11.0 | 242 | 1.0093 | 0.3333 |
| 0.1837 | 12.0 | 264 | 1.2380 | 0.3246 |
| 0.1837 | 13.0 | 286 | 1.1889 | 0.3860 |
| 0.1546 | 14.0 | 308 | 1.1985 | 0.3596 |
| 0.1546 | 15.0 | 330 | 1.2296 | 0.3509 |
| 0.1178 | 16.0 | 352 | 1.1394 | 0.3684 |
| 0.1178 | 17.0 | 374 | 1.1712 | 0.3596 |
| 0.1178 | 18.0 | 396 | 1.1586 | 0.4035 |
| 0.1185 | 19.0 | 418 | 1.9263 | 0.0789 |
| 0.1185 | 20.0 | 440 | 1.3483 | 0.3246 |
| 0.2332 | 21.0 | 462 | 1.3163 | 0.3158 |
| 0.2332 | 22.0 | 484 | 1.2926 | 0.3509 |
| 0.1267 | 23.0 | 506 | 1.2691 | 0.3421 |
| 0.1267 | 24.0 | 528 | 1.3298 | 0.3596 |
| 0.0879 | 25.0 | 550 | 1.2795 | 0.3509 |
| 0.0879 | 26.0 | 572 | 1.2826 | 0.3246 |
| 0.0879 | 27.0 | 594 | 1.2884 | 0.3158 |
| 0.0747 | 28.0 | 616 | 1.4146 | 0.4035 |
| 0.0747 | 29.0 | 638 | 1.3577 | 0.3596 |
| 0.0714 | 30.0 | 660 | 1.2663 | 0.3509 |
| 0.0714 | 31.0 | 682 | 1.2508 | 0.3772 |
| 0.0566 | 32.0 | 704 | 1.3980 | 0.4035 |
| 0.0566 | 33.0 | 726 | 1.4006 | 0.3860 |
| 0.0566 | 34.0 | 748 | 1.4090 | 0.3596 |
| 0.0572 | 35.0 | 770 | 1.4681 | 0.3246 |
| 0.0572 | 36.0 | 792 | 1.4254 | 0.3947 |
| 0.0456 | 37.0 | 814 | 1.4932 | 0.3246 |
| 0.0456 | 38.0 | 836 | 1.3994 | 0.2982 |
| 0.0385 | 39.0 | 858 | 1.4511 | 0.3421 |
| 0.0385 | 40.0 | 880 | 1.3007 | 0.3684 |
| 0.0223 | 41.0 | 902 | 1.3961 | 0.3158 |
| 0.0223 | 42.0 | 924 | 1.4619 | 0.3246 |
| 0.0223 | 43.0 | 946 | 1.3996 | 0.3246 |
| 0.0199 | 44.0 | 968 | 1.5012 | 0.3509 |
| 0.0199 | 45.0 | 990 | 1.4104 | 0.3246 |
| 0.018 | 46.0 | 1012 | 1.5855 | 0.3333 |
| 0.018 | 47.0 | 1034 | 1.4603 | 0.3333 |
| 0.0146 | 48.0 | 1056 | 1.5335 | 0.3421 |
| 0.0146 | 49.0 | 1078 | 1.4883 | 0.3772 |
| 0.0131 | 50.0 | 1100 | 1.5366 | 0.2982 |
| 0.0131 | 51.0 | 1122 | 1.5762 | 0.3509 |
| 0.0131 | 52.0 | 1144 | 1.5434 | 0.3333 |
| 0.0073 | 53.0 | 1166 | 1.4730 | 0.3158 |
| 0.0073 | 54.0 | 1188 | 1.5133 | 0.3509 |
| 0.0049 | 55.0 | 1210 | 1.6912 | 0.3509 |
| 0.0049 | 56.0 | 1232 | 1.6376 | 0.3509 |
| 0.0028 | 57.0 | 1254 | 1.8260 | 0.3509 |
| 0.0028 | 58.0 | 1276 | 1.5748 | 0.3509 |
| 0.0028 | 59.0 | 1298 | 1.6631 | 0.3509 |
| 0.0029 | 60.0 | 1320 | 1.7458 | 0.3509 |
| 0.0029 | 61.0 | 1342 | 1.6343 | 0.3684 |
| 0.002 | 62.0 | 1364 | 1.6433 | 0.3421 |
| 0.002 | 63.0 | 1386 | 1.7486 | 0.3509 |
| 0.0014 | 64.0 | 1408 | 1.8081 | 0.3684 |
| 0.0014 | 65.0 | 1430 | 1.8987 | 0.3947 |
| 0.0007 | 66.0 | 1452 | 1.8811 | 0.3596 |
| 0.0007 | 67.0 | 1474 | 1.8541 | 0.3596 |
| 0.0007 | 68.0 | 1496 | 1.8233 | 0.3509 |
| 0.001 | 69.0 | 1518 | 1.7747 | 0.3509 |
| 0.001 | 70.0 | 1540 | 1.8105 | 0.3509 |
| 0.0008 | 71.0 | 1562 | 1.8254 | 0.3596 |
| 0.0008 | 72.0 | 1584 | 1.8444 | 0.3684 |
| 0.0008 | 73.0 | 1606 | 1.8387 | 0.3509 |
| 0.0008 | 74.0 | 1628 | 1.8501 | 0.3509 |
| 0.0004 | 75.0 | 1650 | 1.8536 | 0.3421 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
gokuls/hbertv1-Massive-intent_48
|
gokuls
| 2023-06-19T22:21:18Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T22:12:24Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-Massive-intent_48
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8573536645351697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-Massive-intent_48
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8740
- Accuracy: 0.8574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4348 | 1.0 | 180 | 1.2038 | 0.6798 |
| 1.0006 | 2.0 | 360 | 0.8063 | 0.7831 |
| 0.6914 | 3.0 | 540 | 0.7823 | 0.7924 |
| 0.5 | 4.0 | 720 | 0.8175 | 0.7959 |
| 0.3877 | 5.0 | 900 | 0.7489 | 0.8239 |
| 0.2981 | 6.0 | 1080 | 0.7043 | 0.8446 |
| 0.2251 | 7.0 | 1260 | 0.7596 | 0.8372 |
| 0.181 | 8.0 | 1440 | 0.8237 | 0.8357 |
| 0.1367 | 9.0 | 1620 | 0.8323 | 0.8362 |
| 0.0995 | 10.0 | 1800 | 0.8589 | 0.8396 |
| 0.0726 | 11.0 | 1980 | 0.8476 | 0.8510 |
| 0.0501 | 12.0 | 2160 | 0.8901 | 0.8534 |
| 0.0338 | 13.0 | 2340 | 0.8992 | 0.8519 |
| 0.022 | 14.0 | 2520 | 0.8740 | 0.8574 |
| 0.0124 | 15.0 | 2700 | 0.8828 | 0.8554 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rekhari/dummy-model
|
rekhari
| 2023-06-19T22:21:00Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-19T22:20:48Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
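As a quick check that the checkpoint loads, a minimal fill-mask sketch (not part of the generated card); the repository ships TensorFlow weights, so the TF framework is requested explicitly.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="rekhari/dummy-model", framework="tf")
print(unmasker("Le camembert est <mask> :)"))
```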
|
NasimB/gpt2_left_out_qed
|
NasimB
| 2023-06-19T22:18:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T13:06:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2_left_out_qed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_left_out_qed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9486
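A minimal sampling sketch (not part of the generated card) using the transformers pipeline.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2_left_out_qed")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```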
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.9695 | 0.27 | 500 | 5.0679 |
| 4.7417 | 0.53 | 1000 | 4.6811 |
| 4.4136 | 0.8 | 1500 | 4.4369 |
| 4.2076 | 1.06 | 2000 | 4.2985 |
| 4.0279 | 1.33 | 2500 | 4.2048 |
| 3.9505 | 1.59 | 3000 | 4.1137 |
| 3.8781 | 1.86 | 3500 | 4.0482 |
| 3.7338 | 2.12 | 4000 | 4.0046 |
| 3.6392 | 2.39 | 4500 | 3.9628 |
| 3.6228 | 2.65 | 5000 | 3.9115 |
| 3.5944 | 2.92 | 5500 | 3.8738 |
| 3.4222 | 3.18 | 6000 | 3.8797 |
| 3.3836 | 3.45 | 6500 | 3.8576 |
| 3.3995 | 3.71 | 7000 | 3.8251 |
| 3.3827 | 3.98 | 7500 | 3.7995 |
| 3.1568 | 4.24 | 8000 | 3.8348 |
| 3.1778 | 4.51 | 8500 | 3.8171 |
| 3.1853 | 4.77 | 9000 | 3.7963 |
| 3.1451 | 5.04 | 9500 | 3.8059 |
| 2.9278 | 5.31 | 10000 | 3.8298 |
| 2.9608 | 5.57 | 10500 | 3.8176 |
| 2.9762 | 5.84 | 11000 | 3.8047 |
| 2.8716 | 6.1 | 11500 | 3.8433 |
| 2.7239 | 6.37 | 12000 | 3.8523 |
| 2.7435 | 6.63 | 12500 | 3.8541 |
| 2.7524 | 6.9 | 13000 | 3.8446 |
| 2.6032 | 7.16 | 13500 | 3.8854 |
| 2.5322 | 7.43 | 14000 | 3.8967 |
| 2.5369 | 7.69 | 14500 | 3.8983 |
| 2.5467 | 7.96 | 15000 | 3.8966 |
| 2.3979 | 8.22 | 15500 | 3.9284 |
| 2.3767 | 8.49 | 16000 | 3.9334 |
| 2.3852 | 8.75 | 16500 | 3.9357 |
| 2.3805 | 9.02 | 17000 | 3.9395 |
| 2.3012 | 9.28 | 17500 | 3.9463 |
| 2.3044 | 9.55 | 18000 | 3.9484 |
| 2.3007 | 9.81 | 18500 | 3.9486 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mrm8488/falcoder-7b
|
mrm8488
| 2023-06-19T22:10:37Z | 29 | 89 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"generated_from_trainer",
"code",
"coding",
"custom_code",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"doi:10.57967/hf/0789",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T21:26:49Z |
---
tags:
- generated_from_trainer
- code
- coding
model-index:
- name: FalCoder
results: []
license: apache-2.0
language:
- code
thumbnail: https://huggingface.co/mrm8488/falcoder-7b/resolve/main/falcoder.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/falcoder-7b/resolve/main/falcoder.png" alt="falcoder logo">
</div>
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FalCoder 🦅👩💻
**Falcon-7b** fine-tuned on the **CodeAlpaca 20k instructions dataset** using the **QLoRA** method with the [PEFT](https://github.com/huggingface/peft) library.
## Model description 🧠
[Falcon 7B](https://huggingface.co/tiiuae/falcon-7b)
## Training and evaluation data 📚
[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples used for fine-tuning the Code Alpaca model.
### Training hyperparameters ⚙
TBA
### Training results 🗒️
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 100 | 0.798500 | 0.767996 |
| 200 | 0.725900 | 0.749880 |
| 300 | 0.669100 | 0.748029 |
| 400 | 0.687300 | 0.742342 |
| 500 | 0.579900 | 0.736735 |
### Example of usage 👩💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "mrm8488/falcoder-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")


def generate(
    instruction,
    max_new_tokens=128,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    **kwargs
):
    prompt = instruction + "\n### Solution:\n"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Solution:")[1].lstrip("\n")


instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { falcoder-7b (Revision e061237) },
year = 2023,
url = { https://huggingface.co/mrm8488/falcoder-7b },
doi = { 10.57967/hf/0789 },
publisher = { Hugging Face }
}
```
|
gokuls/hbertv1-Massive-intent_48_w_in
|
gokuls
| 2023-06-19T22:08:47Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T21:59:47Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-Massive-intent_48_w_in
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8735858337432366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-Massive-intent_48_w_in
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8264
- Accuracy: 0.8736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6907 | 1.0 | 180 | 0.8443 | 0.7777 |
| 0.7472 | 2.0 | 360 | 0.6977 | 0.8210 |
| 0.5222 | 3.0 | 540 | 0.6538 | 0.8352 |
| 0.3848 | 4.0 | 720 | 0.6461 | 0.8357 |
| 0.284 | 5.0 | 900 | 0.6195 | 0.8524 |
| 0.2051 | 6.0 | 1080 | 0.6218 | 0.8574 |
| 0.149 | 7.0 | 1260 | 0.6915 | 0.8495 |
| 0.1108 | 8.0 | 1440 | 0.7420 | 0.8574 |
| 0.0806 | 9.0 | 1620 | 0.7204 | 0.8549 |
| 0.0565 | 10.0 | 1800 | 0.7570 | 0.8603 |
| 0.0355 | 11.0 | 1980 | 0.7622 | 0.8677 |
| 0.0246 | 12.0 | 2160 | 0.8344 | 0.8647 |
| 0.0124 | 13.0 | 2340 | 0.8276 | 0.8682 |
| 0.0072 | 14.0 | 2520 | 0.8264 | 0.8736 |
| 0.0042 | 15.0 | 2700 | 0.8328 | 0.8736 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Brendan/refpydst-100p-referredstates
|
Brendan
| 2023-06-19T21:49:31Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T21:49:11Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-100p-referredstates-referred-states
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 100% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever using this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated from `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-100p-referredstates-referred-states')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-100p-referredstates-referred-states')
model = AutoModel.from_pretrained('Brendan/refpydst-100p-referredstates-referred-states')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-100p-referredstates-referred-states)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 45810 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 15300,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
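For reference, a minimal sketch of how a comparable fine-tuning run could be set up with `sentence-transformers` is shown below. This is not the original training script: the project-specific `RetrievalEvaluator` is omitted, and the contrastive pairs (`train_examples`) are placeholders standing in for the formatted MultiWOZ retrieval pairs described in the linked repository.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the same base encoder used for this model.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder contrastive pairs: label=1 for similar, label=0 for dissimilar.
train_examples = [
    InputExample(texts=["[dialogue context A] ...", "[dialogue context B] ..."], label=1),
    InputExample(texts=["[dialogue context A] ...", "[dialogue context C] ..."], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=24)

# Online contrastive loss, matching the loss class reported above.
train_loss = losses.OnlineContrastiveLoss(model)

# Hyperparameters mirror the fit() parameters listed above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=6,
    warmup_steps=100,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```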
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ducdh1210/dolly-lora-230619-2
|
ducdh1210
| 2023-06-19T21:30:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T21:30:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
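Given the 8-bit settings above, a hedged sketch of how this adapter might be loaded for inference is shown below. The base model name is an assumption (this card does not state which base checkpoint the LoRA was trained against), so substitute the correct base model before use.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Assumption: base checkpoint; replace with the actual base model for this adapter.
base_model_id = "databricks/dolly-v2-3b"
adapter_id = "ducdh1210/dolly-lora-230619-2"

# Mirror the quantization config listed above (8-bit, default int8 threshold).
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the PEFT (LoRA) adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").to(base_model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```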
### Framework versions
- PEFT 0.4.0.dev0
|
namedotpg/dqn-SpaceInvadersTraining
|
namedotpg
| 2023-06-19T21:26:39Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T21:26:01Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 488.50 +/- 158.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga namedotpg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga namedotpg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga namedotpg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
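Beyond the RL Zoo CLI above, a minimal Python sketch for loading the checkpoint directly with SB3 is given below. The checkpoint filename follows the usual RL Zoo naming convention and is an assumption; check the repository's file list if loading fails.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumption: standard RL Zoo checkpoint name for this algo/env pair.
checkpoint = load_from_hub(
    repo_id="namedotpg/dqn-SpaceInvadersTraining",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Recreate the training-time preprocessing: Atari wrappers + 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# buffer_size=1 avoids allocating the full replay buffer, which is not needed for inference.
model = DQN.load(checkpoint, env=env, custom_objects={"buffer_size": 1})

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```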
|
jorgelzn/Reinforce-cartpole_v1
|
jorgelzn
| 2023-06-19T21:21:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T21:21:24Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NasimB/distilgpt2-concat
|
NasimB
| 2023-06-19T21:02:23Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T18:28:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: distilgpt2-concat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-concat
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
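As a point of reference, the hyperparameters above roughly correspond to the following `TrainingArguments`. This is a hedged sketch, not the original training script; the output directory is a placeholder and the dataset loading is omitted.
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilgpt2-concat",   # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=10,
    fp16=True,                        # Native AMP mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                   # matches the 500-step eval interval in the table below
)
```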
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7514 | 0.29 | 500 | 5.6224 |
| 5.3454 | 0.58 | 1000 | 5.1814 |
| 4.9931 | 0.87 | 1500 | 4.9290 |
| 4.7222 | 1.16 | 2000 | 4.7811 |
| 4.5672 | 1.45 | 2500 | 4.6657 |
| 4.4669 | 1.74 | 3000 | 4.5721 |
| 4.3738 | 2.02 | 3500 | 4.4939 |
| 4.175 | 2.31 | 4000 | 4.4613 |
| 4.1659 | 2.6 | 4500 | 4.4128 |
| 4.1369 | 2.89 | 5000 | 4.3666 |
| 3.9858 | 3.18 | 5500 | 4.3656 |
| 3.9337 | 3.47 | 6000 | 4.3419 |
| 3.9348 | 3.76 | 6500 | 4.3095 |
| 3.8826 | 4.05 | 7000 | 4.3066 |
| 3.7106 | 4.34 | 7500 | 4.3104 |
| 3.7404 | 4.63 | 8000 | 4.2893 |
| 3.7459 | 4.92 | 8500 | 4.2648 |
| 3.5695 | 5.21 | 9000 | 4.2984 |
| 3.536 | 5.49 | 9500 | 4.2887 |
| 3.5604 | 5.78 | 10000 | 4.2711 |
| 3.5007 | 6.07 | 10500 | 4.2900 |
| 3.3477 | 6.36 | 11000 | 4.3013 |
| 3.3629 | 6.65 | 11500 | 4.2906 |
| 3.3771 | 6.94 | 12000 | 4.2814 |
| 3.211 | 7.23 | 12500 | 4.3131 |
| 3.1938 | 7.52 | 13000 | 4.3124 |
| 3.21 | 7.81 | 13500 | 4.3093 |
| 3.159 | 8.1 | 14000 | 4.3204 |
| 3.0726 | 8.39 | 14500 | 4.3257 |
| 3.0762 | 8.68 | 15000 | 4.3269 |
| 3.0834 | 8.96 | 15500 | 4.3257 |
| 3.0173 | 9.25 | 16000 | 4.3311 |
| 3.0116 | 9.54 | 16500 | 4.3325 |
| 3.0155 | 9.83 | 17000 | 4.3325 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ducdh1210/dolly-lora-230619
|
ducdh1210
| 2023-06-19T21:01:31Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T21:01:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
bsuutari/path_to_saved_model
|
bsuutari
| 2023-06-19T20:58:31Z | 57 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T20:49:13Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - bsuutari/path_to_saved_model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained with the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
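A minimal inference sketch with diffusers is shown below; the prompt is only an example built around the instance token `sks dog`, and the precision choice is an assumption.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "bsuutari/path_to_saved_model",
    torch_dtype=torch.float16,
).to("cuda")

# Example prompt using the instance token this model was trained on.
image = pipe(
    "a photo of sks dog in a garden",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_dog.png")
```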
|
Brendan/refpydst-100p-referredstates-referred-states
|
Brendan
| 2023-06-19T20:50:01Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:30:22Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-100p-referredstates-referred-states
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 100% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-100p-referredstates-referred-states')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-100p-referredstates-referred-states')
model = AutoModel.from_pretrained('Brendan/refpydst-100p-referredstates-referred-states')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-100p-referredstates-referred-states)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 45810 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 15300,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-1p-referredstates-split-v2
|
Brendan
| 2023-06-19T20:50:00Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:29:30Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-referredstates-split-v2
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 1% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-referredstates-split-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-referredstates-split-v2')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-referredstates-split-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-referredstates-split-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 435 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-1p-referredstates-split-v3
|
Brendan
| 2023-06-19T20:50:00Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:29:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-referredstates-split-v3
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 1% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-referredstates-split-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-referredstates-split-v3')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-referredstates-split-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-referredstates-split-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 483 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-1p-referredstates-split-v1
|
Brendan
| 2023-06-19T20:50:00Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:10:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-referredstates-split-v1
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 1% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-referredstates-split-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-referredstates-split-v1')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-referredstates-split-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-referredstates-split-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 437 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-1p-icdst-split-v1
|
Brendan
| 2023-06-19T20:49:58Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:28:39Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-icdst-split-v1
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 1% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-icdst-split-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-icdst-split-v1')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-icdst-split-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-icdst-split-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 437 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-1p-icdst-split-v2
|
Brendan
| 2023-06-19T20:49:52Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:28:13Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-icdst-split-v2
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 1% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-icdst-split-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-icdst-split-v2')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-icdst-split-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-icdst-split-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 435 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
IABCD/eduedudiffusion
|
IABCD
| 2023-06-19T20:49:50Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T19:33:34Z |
---
license: cc-by-nc-nd-4.0
tags:
- text-to-image
- stable-diffusion
---
### EduEduDiffusion0.2 Dreambooth model trained by nicolasdec for EduEdu
Test the concept via [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Training version 0.2.
Positive Prompts: PROMPT, (eduedu) style, illustration, vector, cartoon lighting
Negatives: bad anatomy, ugly, missing arms, bad proportions, tiling, missing legs, blurry, poorly drawn feet, morbid, cloned face, extra limbs, mutated hands, cropped, disfigured, mutation, deformed, deformed, mutilated, dehydrated, body out of frame, out of frame, disfigured, bad anatomy, poorly drawn face, duplicate, cut off, poorly drawn hands, error, low contrast, signature, extra arms, underexposed, text, extra fingers, overexposed, too many fingers, extra legs, bad art, ugly, extra limbs, beginner, username, fused fingers, amateur, watermark, gross proportions, distorted face, worst quality, jpeg artifacts, low quality, malformed limbs, long neck, lowres, poorly Rendered face, low resolution, low saturation, bad composition, Images cut out at the top, left, right, bottom, deformed body features, poorly rendered hands
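A hedged loading sketch with diffusers follows; the subject in the prompt ("a classroom scene") is an illustrative assumption, and the negative prompt is a shortened version of the list above.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "IABCD/eduedudiffusion",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a classroom scene, (eduedu) style, illustration, vector, cartoon lighting"
negative_prompt = (
    "bad anatomy, ugly, bad proportions, blurry, deformed, mutated hands, "
    "extra limbs, low quality, jpeg artifacts, watermark, text"
)

image = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=7.5).images[0]
image.save("eduedu_sample.png")
```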
|
Brendan/refpydst-5p-referredstates-split-v3
|
Brendan
| 2023-06-19T20:49:43Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:27:45Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-5p-referredstates-split-v3
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 5% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-5p-referredstates-split-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-5p-referredstates-split-v3')
model = AutoModel.from_pretrained('Brendan/refpydst-5p-referredstates-split-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-5p-referredstates-split-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2233 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 800,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-5p-referredstates-split-v1
|
Brendan
| 2023-06-19T20:49:39Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:27:21Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-5p-referredstates-split-v1
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 5% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-5p-referredstates-split-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-5p-referredstates-split-v1')
model = AutoModel.from_pretrained('Brendan/refpydst-5p-referredstates-split-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-5p-referredstates-split-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2276 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 800,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-5p-icdst-split-v3
|
Brendan
| 2023-06-19T20:49:28Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:26:23Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-5p-icdst-split-v3
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned on a 5% few-shot split of the MultiWOZ dataset with a supervised contrastive loss. It is fine-tuned for use as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-5p-icdst-split-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-5p-icdst-split-v3')
model = AutoModel.from_pretrained('Brendan/refpydst-5p-icdst-split-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-5p-icdst-split-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2233 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 800,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-10p-referredstates-split-v2
|
Brendan
| 2023-06-19T20:49:24Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:23:52Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-10p-referredstates-split-v2
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 10% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever over this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated from `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-10p-referredstates-split-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-10p-referredstates-split-v2')
model = AutoModel.from_pretrained('Brendan/refpydst-10p-referredstates-split-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-10p-referredstates-split-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4566 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 1600,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-9
|
mrizalf7
| 2023-06-19T20:40:01Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-19T17:28:21Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-finetuned-small-squad-indonesian-rizal-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-finetuned-small-squad-indonesian-rizal-9
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6372 | 1.0 | 4128 | 1.7537 |
| 1.3958 | 2.0 | 8256 | 1.7289 |
| 1.2485 | 3.0 | 12384 | 1.7340 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
cosimoiaia/Loquace-70m
|
cosimoiaia
| 2023-06-19T20:21:56Z | 182 | 3 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"alpaca",
"llama",
"llm",
"finetune",
"Italian",
"qlora",
"conversational",
"it",
"dataset:cosimoiaia/Loquace-102k",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T05:18:49Z |
---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---
Model Card for Loquace-70m
# 🇮🇹 Loquace-70m 🇮🇹
An exclusively Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹
The Loquace Italian LLM models were created as a proof of concept to evaluate how language tuning can be achieved using QLoRa, by instruction-tuning foundational LLMs
on a dataset in a specific language.
The QLoRa (https://github.com/artidoro/qlora) fine-tuning method significantly lowers the resource requirements compared to other available methods,
which makes it easy to run the process on significantly larger datasets while still using consumer GPUs and still achieving high accuracy.
## Model Description
Loquace-70m is the smallest model of the Loquace family. It was trained using QLoRa on a large dataset of 102k question/answer pairs
exclusively in Italian.
The related code can be found at: https://github.com/cosimoiaia/Loquace
Loquace-70m is part of the big Loquace family:
https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B.
https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B
## Usage
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
BitsAndBytesConfig
)
tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-70m", padding_side="right", use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    "cosimoiaia/Loquace-70m",
    device_map="auto",
    # 4-bit quantization via bitsandbytes; also passing load_in_8bit here would conflict with this config.
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        llm_int8_has_fp16_weight=False
    )
)
```
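Once loaded, text generation works like any other causal LM. The following is a minimal sketch; the prompt and decoding settings are illustrative and may not match the exact instruction template used during fine-tuning:
```python
# Illustrative Italian prompt: "What is the capital of Italy?"
prompt = "Qual è la capitale dell'Italia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```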
## Training
Loquace-70m was trained on a conversational dataset comprising 102k question/answer pairs in the Italian language.
The training data was constructed by putting together translations from the original Alpaca dataset and other sources like the OpenAssistant dataset.
The model was trained for only 10,000 iterations and took 6 hours on a single RTX 3090, kindly provided by Genesis Cloud. (https://gnsiscld.co/26qhlf)
## Limitations
- Loquace-70m may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.
## Dependencies
- PyTorch
- Transformers library by Hugging Face
- bitsandbytes
- QLoRa
|
andrewsiah/dqn-SpaceInvadersNoFrameskip-v4
|
andrewsiah
| 2023-06-19T20:19:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T20:18:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 602.00 +/- 288.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga andrewsiah -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga andrewsiah -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga andrewsiah
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
SimsConsulting/GPT2-From-Scratch
|
SimsConsulting
| 2023-06-19T20:14:58Z | 145 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T04:14:45Z |
---
license: apache-2.0
---
Here I provide you with a completely untrained, from-scratch model of GPT-2,
which is the 124M-parameter version.
All of its weights have been randomized and then saved, wiping out all previous training.
It was then trained for 50 epochs on the original book "Peter Pan" just so I could get the save and tokenization files to upload to Hugging Face.
As an interesting side note, it is surprisingly almost coherent if you test it on the right with the example text and press "compute".
What is this and how is it different? It is different from simply downloading a new 'gpt2' because all pre-training has been wiped out (except for the 50 epochs I mentioned).
WHY?! This allows you to train the model from scratch, which leaves more capacity free for training specifically on your use case!
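For reference, a minimal sketch of how a blank GPT-2 like this can be created (not the exact script used for this repo) looks like:
```python
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

# The default GPT2Config matches the 124M-parameter architecture (12 layers, 12 heads, 768 hidden size).
config = GPT2Config()
model = GPT2LMHeadModel(config)  # weights are randomly initialized, no pre-training

# Reuse the original 'gpt2' tokenizer/vocabulary files.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Save both so they can be uploaded to the Hub and trained from scratch later.
model.save_pretrained("./gpt2-from-scratch")
tokenizer.save_pretrained("./gpt2-from-scratch")
```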
You can see more examples on the original gpt model card page @ https://huggingface.co/gpt2
Example usage:
Requirements: `pip install transformers`
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Substitute 'your_model_name' with the name of your model
model_name_or_path = 'your_model_name'

# Load pre-trained model tokenizer
tokenizer = GPT2Tokenizer.from_pretrained(model_name_or_path)

# Load pre-trained model
model = GPT2LMHeadModel.from_pretrained(model_name_or_path)

# Model input
input_text = "Hello, how are you?"

# Encode input text
input_ids = tokenizer.encode(input_text, return_tensors='pt')

# Generate output (do_sample=True so the temperature setting actually takes effect)
output = model.generate(input_ids, max_length=50, num_return_sequences=1, do_sample=True, temperature=0.7)

# Decode output
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```
License: Apache 2.0. The Apache 2.0 license allows software developers to alter, copy, or update the source code of existing software.
Furthermore, developers can then distribute any copies or modifications that
they make of the software's source code.
COMMERCIAL USE: YES
PERSONAL USE: YES
EDUCATIONAL USE: YES
Enjoy!
|
sd-concepts-library/mersh
|
sd-concepts-library
| 2023-06-19T20:08:53Z | 0 | 0 | null |
[
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-06-19T20:08:51Z |
---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### Mersh on Stable Diffusion
This is the `<lolcowmersh>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
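Alternatively, recent versions of 🤗 diffusers can load the learned embedding directly. A minimal sketch, assuming this repo follows the standard textual-inversion file layout:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model taken from this card's metadata; the trained token is <lolcowmersh>.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/mersh")

image = pipe("a photo of <lolcowmersh>").images[0]
image.save("mersh.png")
```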
Here is the new concept you will be able to use as an `object`:




|
agshruti/distilbert-base-uncased-finetuned-imdb
|
agshruti
| 2023-06-19T19:48:52Z | 70 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-19T17:57:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: agshruti/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# agshruti/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8576
- Validation Loss: 2.5515
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8576 | 2.5515 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nev/byt5-song-lyrics
|
nev
| 2023-06-19T19:47:23Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"music",
"byt5",
"en",
"license:isc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-19T10:30:21Z |
---
language:
- en
tags:
- music
- t5
- byt5
license: "isc"
metrics:
- accuracy
---
# ByT5 Song Lyrics
This is a Seq2Seq model trained on a karaoke dataset to predict syllables with pitch and timing from song lyrics.
As of writing, the model has only been trained on 1/2 of the full dataset. Expect the quality to improve later.
The Hugging Face demo seems to produce outputs with a small sequence length, so what you see on the right will only make a prediction for the first two syllables.
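Since this is a standard T5-style seq2seq checkpoint, it can be queried with the usual generation API. The input below is only a placeholder, as the exact lyric formatting the model expects is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("nev/byt5-song-lyrics")
model = AutoModelForSeq2SeqLM.from_pretrained("nev/byt5-song-lyrics")

# Placeholder lyric line; real inputs may require a specific format.
inputs = tokenizer("Twinkle twinkle little star", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```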
|
wesleyacheng/sms-spam-classification-with-bert
|
wesleyacheng
| 2023-06-19T19:39:06Z | 8,660 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sms_spam",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T05:30:59Z |
---
license: apache-2.0
datasets:
- sms_spam
language:
- en
metrics:
- f1
- accuracy
pipeline_tag: text-classification
widget:
- text: +26.787$ burn out in 24 hours, Let it have drowned, http://bit.ly/7ayp
example_title: Spam Example
- text: Hey want to cook something together tonight?
example_title: Ham Example
---
First posted in my [Kaggle](https://www.kaggle.com/code/wesleyacheng/sms-spam-classification-with-bert).
You know what really grinds my gears? Spam! 😤
I made an SMS spam classifier using [transfer learning](https://en.wikipedia.org/wiki/Transfer_learning) on [BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html) with a [Singaporean SMS Spam dataset](https://huggingface.co/datasets/sms_spam).
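A quick way to try it locally is through the `text-classification` pipeline; the widget examples from this card are reused below, and the exact label names come from the checkpoint's config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wesleyacheng/sms-spam-classification-with-bert",
)

print(classifier("+26.787$ burn out in 24 hours, Let it have drowned, http://bit.ly/7ayp"))  # expected: spam
print(classifier("Hey want to cook something together tonight?"))  # expected: ham
```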
|
mrm8488/falcon-7b-ft-codeAlpaca_20k-v2
|
mrm8488
| 2023-06-19T19:38:41Z | 0 | 11 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-19T18:36:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-ft-codeAlpaca_20k-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-ft-codeAlpaca_20k-v2
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 550
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7985 | 0.35 | 100 | 0.7680 |
| 0.7259 | 0.71 | 200 | 0.7499 |
| 0.6691 | 1.06 | 300 | 0.7480 |
| 0.6873 | 1.42 | 400 | 0.7423 |
| 0.5799 | 1.77 | 500 | 0.7367 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
digiplay/epi_2.5Dphotogodess_diffusers
|
digiplay
| 2023-06-19T19:24:00Z | 385 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-27T12:02:45Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/26761?modelVersionId=36352
Version 3
Original Author's DEMO images:
,%20(hyperrealistic),%20mh-yk,%201girl,%20%20solo,%20brown%20hair,%20brown%20eyes,%20,%20long%20hair,%20chinese%20clothes,%20twintails,%20outdoor,.jpeg)
|
Marfuen98/photorealistic-1
|
Marfuen98
| 2023-06-19T19:01:19Z | 9 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-01T20:21:14Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/43331?modelVersionId=94640
|
greenw0lf/wav2vec2-large-xls-r-1b-frisian-cv-8-1h
|
greenw0lf
| 2023-06-19T19:00:15Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-31T18:39:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-1b-frisian-cv-8-1h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: fy-NL
split: validation
args: fy-NL
metrics:
- name: Wer
type: wer
value: 0.23732323953720896
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: fy-NL
split: test
args: fy-NL
metrics:
- name: Wer
type: wer
value: 0.25404682757623936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-frisian-cv-8-1h
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4120
- Wer: 0.2373
And on the test set:
- Wer: 0.2540
## Model description
This model has been developed for my Master's thesis in "Voice Technology" at Rijksuniversiteit Groningen - Campus Fryslân. It corresponds to experiment 4,
in which the training set is 1 hour of Frisian speech randomly selected from all validated data, excluding the test and evaluation sets.
## Intended uses & limitations
The intended use is for recognizing Frisian speech.
Limitations include no LM rescoring and using version 8.0 of Common Voice instead of 13.0.
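For a quick transcription test, the standard ASR pipeline can be used. A minimal sketch (the audio file name is a placeholder; the recording should be, or will be resampled to, 16 kHz):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="greenw0lf/wav2vec2-large-xls-r-1b-frisian-cv-8-1h",
)

# Placeholder path to a Frisian recording; the pipeline handles decoding/resampling via ffmpeg.
print(asr("frisian_sample.wav")["text"])
```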
## Training and evaluation data
The evaluation split used is the one available in the Common Voice 8.0 Frisian subset. The train split is 1 hour of Frisian randomly selected from validated data except for the recordings from test and evaluation splits.
## Training procedure
The script used for training this model can be found in this GitHub repository: [link](https://github.com/greenw0lf/MSc-VT-Thesis/).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2987 | 4.35 | 100 | 3.0210 | 1.0 |
| 3.1424 | 8.7 | 200 | 2.9611 | 1.0 |
| 2.6299 | 13.04 | 300 | 0.9929 | 0.8377 |
| 1.3134 | 17.39 | 400 | 0.5679 | 0.5264 |
| 0.9747 | 21.74 | 500 | 0.4516 | 0.3764 |
| 0.8755 | 26.09 | 600 | 0.4515 | 0.3403 |
| 0.7227 | 30.43 | 700 | 0.4169 | 0.3211 |
| 0.6634 | 34.78 | 800 | 0.4159 | 0.2962 |
| 0.5568 | 39.13 | 900 | 0.4081 | 0.2795 |
| 0.7943 | 43.48 | 1000 | 0.4090 | 0.2709 |
| 0.5537 | 47.83 | 1100 | 0.4239 | 0.2649 |
| 0.5596 | 52.17 | 1200 | 0.4029 | 0.2561 |
| 0.5523 | 56.52 | 1300 | 0.4073 | 0.2524 |
| 0.4579 | 60.87 | 1400 | 0.4098 | 0.2470 |
| 0.6477 | 65.22 | 1500 | 0.4099 | 0.2446 |
| 0.4957 | 69.57 | 1600 | 0.4167 | 0.2475 |
| 0.3246 | 73.91 | 1700 | 0.4146 | 0.2389 |
| 0.3937 | 78.26 | 1800 | 0.4120 | 0.2373 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
digiplay/fantasticmix2.5D_test
|
digiplay
| 2023-06-19T18:59:40Z | 272 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-26T18:03:23Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
fantasticmix2.5D
https://civitai.com/models/20632?modelVersionId=39725
Version 2
Original Author's DEMO image :

|
Tyrranen/ppo-Huggy
|
Tyrranen
| 2023-06-19T18:56:38Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-19T18:56:29Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Tyrranen/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fedbor/secondo_modello
|
fedbor
| 2023-06-19T18:55:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T18:55:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
amangarg98/my_awesome_model
|
amangarg98
| 2023-06-19T18:51:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T18:40:56Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: amangarg98/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amangarg98/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0266
- Validation Loss: 0.0126
- Train Accuracy: 0.9953
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3492, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0266 | 0.0126 | 0.9953 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MUmairAB/English_to_French_Translation_Transformer
|
MUmairAB
| 2023-06-19T18:46:14Z | 1 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-18T08:50:01Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | RMSprop |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | 100 |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| rho | 0.9 |
| momentum | 0.0 |
| epsilon | 1e-07 |
| centered | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
sngsfydy/models
|
sngsfydy
| 2023-06-19T18:45:33Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-17T16:34:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: models
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4704
- Accuracy: 0.8182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4144 | 0.99 | 20 | 0.9938 | 0.7 |
| 0.7896 | 1.98 | 40 | 0.7022 | 0.7152 |
| 0.6191 | 2.96 | 60 | 0.6079 | 0.7636 |
| 0.6114 | 4.0 | 81 | 0.5554 | 0.7939 |
| 0.5365 | 4.99 | 101 | 0.5233 | 0.8152 |
| 0.4989 | 5.98 | 121 | 0.4934 | 0.8303 |
| 0.5111 | 6.96 | 141 | 0.5181 | 0.8 |
| 0.476 | 8.0 | 162 | 0.4844 | 0.8182 |
| 0.4655 | 8.99 | 182 | 0.4870 | 0.8152 |
| 0.4335 | 9.98 | 202 | 0.4802 | 0.8242 |
| 0.44 | 10.96 | 222 | 0.4776 | 0.8182 |
| 0.3989 | 12.0 | 243 | 0.4804 | 0.8182 |
| 0.4007 | 12.99 | 263 | 0.4768 | 0.8242 |
| 0.3987 | 13.98 | 283 | 0.4610 | 0.8303 |
| 0.3922 | 14.96 | 303 | 0.4578 | 0.8212 |
| 0.3924 | 16.0 | 324 | 0.4804 | 0.8182 |
| 0.3995 | 16.99 | 344 | 0.4736 | 0.8121 |
| 0.3623 | 17.98 | 364 | 0.4715 | 0.8121 |
| 0.3621 | 18.96 | 384 | 0.4671 | 0.8212 |
| 0.3629 | 19.75 | 400 | 0.4704 | 0.8182 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
PauloNeto36/layoutxlm-finetuned-xfund-fr
|
PauloNeto36
| 2023-06-19T18:34:34Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-16T23:49:40Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hopkins/ss-10k
|
hopkins
| 2023-06-19T18:19:03Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T18:07:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: ss-10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ss-10k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 18
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1881 | 15.38 | 200 | 5.8726 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.12.0
- Tokenizers 0.13.3
|
UnaiGurbindo/ppo-LunarLander-v2
|
UnaiGurbindo
| 2023-06-19T18:13:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T18:13:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.59 +/- 20.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
tanmayyyj/Cartpole-v1
|
tanmayyyj
| 2023-06-19T17:55:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T17:55:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 496.50 +/- 10.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hassansoliman/falcon-40b-qlora-utterance-adaptations_v3
|
hassansoliman
| 2023-06-19T17:52:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T17:51:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
ABAtanasov/Taxi-v3
|
ABAtanasov
| 2023-06-19T17:49:56Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T17:47:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ABAtanasov/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CodyKilpatrick/a2c-PandaReachDense-v2
|
CodyKilpatrick
| 2023-06-19T17:47:06Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T17:23:42Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.16 +/- 0.33
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mrm8488/falcon-7b-ft-codeAlpaca_20k
|
mrm8488
| 2023-06-19T17:35:58Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-19T14:46:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-ft-codeAlpaca_20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-ft-codeAlpaca_20k
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7623 | 0.18 | 50 | 0.7899 |
| 0.7985 | 0.35 | 100 | 0.7680 |
| 0.7551 | 0.53 | 150 | 0.7570 |
| 0.7261 | 0.71 | 200 | 0.7499 |
| 0.8184 | 0.89 | 250 | 0.7461 |
| 0.7067 | 1.06 | 300 | 0.7480 |
| 0.6801 | 1.24 | 350 | 0.7463 |
| 0.6432 | 1.42 | 400 | 0.7423 |
| 0.7141 | 1.6 | 450 | 0.7398 |
| 0.669 | 1.77 | 500 | 0.7383 |
| 0.7177 | 1.95 | 550 | 0.7342 |
| 0.6419 | 2.13 | 600 | 0.7553 |
| 0.6395 | 2.3 | 650 | 0.7510 |
| 0.6255 | 2.48 | 700 | 0.7498 |
| 0.5556 | 2.66 | 750 | 0.7474 |
| 0.6592 | 2.84 | 800 | 0.7470 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hungngo04/cluster_to_text
|
hungngo04
| 2023-06-19T17:28:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T16:06:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: cluster_to_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cluster_to_text
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0608
- Bleu: 39.5087
- Gen Len: 10.2429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8864 | 1.0 | 4678 | 1.5653 | 17.9224 | 10.3526 |
| 1.6271 | 2.0 | 9356 | 1.3336 | 26.9113 | 10.2905 |
| 1.4621 | 3.0 | 14034 | 1.1952 | 32.9922 | 10.2873 |
| 1.3908 | 4.0 | 18712 | 1.1183 | 36.6438 | 10.2917 |
| 1.3385 | 5.0 | 23390 | 1.0753 | 38.768 | 10.2479 |
| 1.3138 | 6.0 | 28068 | 1.0608 | 39.5087 | 10.2429 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
UnaiGurbindo/ppo-Huggy
|
UnaiGurbindo
| 2023-06-19T17:28:26Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-19T17:28:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: UnaiGurbindo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gilang21/Sarah
|
gilang21
| 2023-06-19T16:57:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T16:51:56Z |
---
license: creativeml-openrail-m
---
|
eolang/SW-v1
|
eolang
| 2023-06-19T16:54:25Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"sw",
"dataset:xnli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-07T22:52:13Z |
---
datasets:
- xnli
language:
- sw
library_name: transformers
examples: null
widget:
- text: Joe Bidden ni rais wa [MASK].
example_title: Sentence 1
- text: Tumefanya mabadiliko muhimu [MASK] sera zetu za faragha na vidakuzi
example_title: Sentence 2
- text: Mtoto anaweza kupoteza [MASK] kabisa
example_title: Sentence 3
---
# SW
## Model description
This is a transformers model pre-trained on a large corpus of Swahili data in a self-supervised fashion. This means it
was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pre-trained with one objective:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
This way, the model learns an inner representation of the Swahili language that can then be used to extract features
useful for downstream tasks e.g.
* Named Entity Recognition (Token Classification)
* Text Classification
The model is based on the original BERT uncased model, which is described in the [google-research/bert README](https://github.com/google-research/bert/blob/master/README.md)
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's primarily intended to be fine-tuned on a downstream task.
### How to use
You can use this model directly with a pipeline for masked language modeling:
#### Tokenizer
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("eolang/SW-v1")
model = AutoModelForMaskedLM.from_pretrained("eolang/SW-v1")
text = "Hii ni tovuti ya idhaa ya Kiswahili ya BBC ambayo hukuletea habari na makala kutoka Afrika na kote duniani kwa lugha ya Kiswahili."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
print(output)
```
#### Fill Mask Model
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("eolang/SW-v1")
model = AutoModelForMaskedLM.from_pretrained("eolang/SW-v1")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
sample_text = "Tumefanya mabadiliko muhimu [MASK] sera zetu za faragha na vidakuzi"
for prediction in fill_mask(sample_text):
print(f"{prediction['sequence']}, confidence: {prediction['score']}")
```
### Limitations and Bias
Even if the training data used for this model could be reasonably neutral, this model can have biased predictions.
This is something I'm still working on improving. Feel free to share suggestions/comments via [Discussions](https://huggingface.co/eolang/SW-v1/discussions)
|
elberaguilar/finetuning-sentiment-model-3000-samples
|
elberaguilar
| 2023-06-19T16:43:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T04:20:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1583
- Accuracy: 0.9493
- F1: 0.9676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sevdeawesome/Taxi-v3
|
sevdeawesome
| 2023-06-19T16:35:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T16:33:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.78
name: mean_reward
verified: false
---
|
bvkbharadwaj/whisper-small-sanskasr
|
bvkbharadwaj
| 2023-06-19T16:31:12Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sa",
"dataset:addy88/sanskrit-asr-84",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-08T12:48:54Z |
---
language:
- sa
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- addy88/sanskrit-asr-84
model-index:
- name: Whisper Small Sanskasr - bvkb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Sanskasr - bvkb
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the addy88/sanskrit-asr-84 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 100
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Narotomaki/kimihimee
|
Narotomaki
| 2023-06-19T16:30:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T14:34:16Z |
---
license: creativeml-openrail-m
---
|
uomnf97/klue-roberta-finetuned-korquad-v2
|
uomnf97
| 2023-06-19T16:26:44Z | 187 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"korean",
"klue",
"korquad",
"ko",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-19T14:43:56Z |
---
language: ko
tags:
- korean
- klue
- korquad
metrics:
- exact_match
- f1
library_name: transformers
---
# 🧑🏻💻 KLUE RoBERTa Large
- This model was built by fine-tuning klue/roberta-large on 27,423 examples from KorQuAD version 2.1 for Korean Machine Reading Comprehension.
<br>
# 📝 What You Should Know
- Rather than the raw KorQuAD v2.1 data, the training data was preprocessed by removing hyperlinks, tags, and the Unicode BOM, and examples whose context length exceeded 7,500 were excluded, leaving 27,423 examples for training.
- Original data link: https://korquad.github.io/
<br>
# 📁 Getting Started
```python
from transformers import AutoConfig, AutoModelForQuestionAnswering, AutoTokenizer
config = AutoConfig.from_pretrained('uomnf97/klue-roberta-finetuned-korquad-v2')
tokenizer = AutoTokenizer.from_pretrained('uomnf97/klue-roberta-finetuned-korquad-v2')
model = AutoModelForQuestionAnswering.from_pretrained('uomnf97/klue-roberta-finetuned-korquad-v2',config=config)
```
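For a quick end-to-end check, the question-answering pipeline can be used on top of the loaded model and tokenizer; the question/context pair below is only an illustration:
```python
from transformers import pipeline

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Illustrative example: "Where is the capital of South Korea?" / "The capital of South Korea is Seoul."
result = qa(
    question="대한민국의 수도는 어디인가요?",
    context="대한민국의 수도는 서울이다.",
)
print(result["answer"], result["score"])
```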
|
Deojaklah/deaa
|
Deojaklah
| 2023-06-19T16:05:32Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T16:04:44Z |
---
license: creativeml-openrail-m
---
|
Mollel/alpaca-tweets-sentiment
|
Mollel
| 2023-06-19T16:04:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T15:53:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch of these settings follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
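Expressed in code, the values above correspond roughly to the `BitsAndBytesConfig` below. This is an illustrative reconstruction, not the actual training script; the base-model identifier is omitted because it is not stated in this card.
```python
from transformers import BitsAndBytesConfig

# 8-bit quantization settings listed above; the bnb_4bit_* fields keep their
# defaults and are unused when load_in_8bit=True.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# The config would then be passed when loading the base model, e.g.:
# AutoModelForCausalLM.from_pretrained(<base_model_id>, quantization_config=bnb_config)
```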
### Framework versions
- PEFT 0.4.0.dev0
|
Nerternal/CLIPFixedModels
|
Nerternal
| 2023-06-19T16:00:24Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-19T15:43:50Z |
Models with [fixed CLIP tensors](https://rentry.org/clipfix) using MBW.
|
Noahhow/Gragas
|
Noahhow
| 2023-06-19T15:47:32Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"Lol",
"League of legends ",
"audio-to-audio",
"en",
"dataset:tiiuae/falcon-refinedweb",
"license:creativeml-openrail-m",
"region:us"
] |
audio-to-audio
| 2023-06-19T15:38:07Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
metrics:
- charcut_mt
pipeline_tag: audio-to-audio
tags:
- Lol
- 'League of legends '
license: creativeml-openrail-m
library_name: adapter-transformers
---
|
Aconit13/opus-mt-en-ro-finetuned-en-to-ro
|
Aconit13
| 2023-06-19T15:34:20Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T14:59:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
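A minimal usage sketch (not part of the original card); the input sentence is made up and the output is whatever the fine-tuned checkpoint produces:
```python
from transformers import pipeline

# Hypothetical English-to-Romanian translation example.
translator = pipeline(
    "translation",
    model="Aconit13/opus-mt-en-ro-finetuned-en-to-ro",
)
print(translator("The weather is nice today.")[0]["translation_text"])
```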
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
TheFools/Nurhyt
|
TheFools
| 2023-06-19T15:30:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T15:21:31Z |
---
license: creativeml-openrail-m
---
|
andrewsiah/q-FrozenLake-v1-4x4-noSlippery
|
andrewsiah
| 2023-06-19T15:16:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T15:16:21Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

model = load_from_hub(repo_id="andrewsiah/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
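The `load_from_hub` helper is not defined in this card. In the Deep RL course notebooks it is a small pickle-loading utility; the sketch below is an assumption about its shape, not the exact course code:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dictionary from the Hugging Face Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```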
|
Heng666/falcon-7b-sharded-bf16-english-quote-qlora
|
Heng666
| 2023-06-19T15:10:33Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T15:05:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch of these settings follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
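In code form, these values correspond to the 4-bit (QLoRA-style) configuration sketched below; this is illustrative only and not extracted from the actual training script.
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization and bfloat16 compute,
# matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```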
### Framework versions
- PEFT 0.4.0.dev0
|
ann-stro/roberta_token_new
|
ann-stro
| 2023-06-19T15:05:59Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"token-classification",
"license:cc-by-nc-nd-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-19T14:55:56Z |
---
license: cc-by-nc-nd-3.0
---
|
Keithulu/distilgpt2-finetuned-python-stack
|
Keithulu
| 2023-06-19T15:02:19Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T14:49:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-python-stack
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-python-stack
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9321
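The card does not include a usage example; the snippet below is a hedged sketch (the prompt is a made-up Python-related question and the sampling settings are arbitrary).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Keithulu/distilgpt2-finetuned-python-stack")
model = AutoModelForCausalLM.from_pretrained("Keithulu/distilgpt2-finetuned-python-stack")

# Sample a short completion for an illustrative prompt.
inputs = tok("How do I reverse a list in Python?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tok.decode(out[0], skip_special_tokens=True))
```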
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 3.1229 |
| No log | 2.0 | 182 | 2.9666 |
| No log | 3.0 | 273 | 2.9321 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/bert_base_120
|
gokuls
| 2023-06-19T14:58:13Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-18T13:24:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_120
This model is a fine-tuned version of [gokuls/bert_base_96](https://huggingface.co/gokuls/bert_base_96) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3904
- Accuracy: 0.5602
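Since the checkpoint is tagged for fill-mask, a quick way to probe it is the snippet below. This is a sketch, assuming the repository ships a compatible tokenizer with the standard `[MASK]` token; the example sentence is made up.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="gokuls/bert_base_120")
for pred in unmasker("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```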
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7403 | 0.08 | 10000 | 2.6150 | 0.5307 |
| 2.6939 | 0.16 | 20000 | 2.5743 | 0.5360 |
| 2.6549 | 0.25 | 30000 | 2.5380 | 0.5408 |
| 2.6298 | 0.33 | 40000 | 2.5020 | 0.5455 |
| 2.5883 | 0.41 | 50000 | 2.4715 | 0.5494 |
| 2.5629 | 0.49 | 60000 | 2.4432 | 0.5533 |
| 2.5274 | 0.57 | 70000 | 2.4163 | 0.5568 |
| 2.5059 | 0.66 | 80000 | 2.3904 | 0.5602 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
teddy0413/Accounting_glm0619
|
teddy0413
| 2023-06-19T14:55:02Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T14:54:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
syf2023/gpt2
|
syf2023
| 2023-06-19T14:53:15Z | 203 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T14:49:39Z |
---
language: en
tags:
- exbert
license: mit
duplicated_from: gpt2
---
# GPT-2
You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
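Both properties can be checked quickly from the released checkpoint, reusing the classes already imported in the earlier examples:
```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
print(len(tokenizer))            # 50257 tokens in the byte-level BPE vocabulary
print(model.config.n_positions)  # 1024-token maximum sequence length
```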
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
xusenlin/duee-gplinker
|
xusenlin
| 2023-06-19T14:53:10Z | 36 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"event extraction",
"zh",
"dataset:DuEE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-19T14:22:12Z |
---
language:
- zh
tags:
- event extraction
license: apache-2.0
datasets:
- DuEE
metrics:
- f1
---
# GPLinker Event Extraction Model
## Model Introduction
+ Dataset: Baidu `DUEE` information extraction dataset
+ Method: [GPLinker: joint event extraction based on GlobalPointer](https://spaces.ac.cn/archives/8926)
## Usage
```commandline
pip install litie
```
```python
from pprint import pprint
from litie.pipelines import EventExtractionPipeline
pipeline = EventExtractionPipeline("gplinker", model_name_or_path="xusenlin/duee-gplinker", model_type="bert")
text = "油服巨头哈里伯顿裁员650人 因美国油气开采活动放缓。"
pprint(pipeline(text))
# Output
[
[
{
"event_type": "组织关系-裁员",
"arguments": [
{
"role": "裁员人数",
"argument": "650人"
},
{
"role": "裁员方",
"argument": "油服巨头哈里伯顿"
}
]
}
]
]
```
Detailed code for model training and inference can be found in [litie](https://github.com/xusenlinzy/lit-ie)
|
ManuD/speecht5_finetuned_voxpopuli_de_Merkel
|
ManuD
| 2023-06-19T14:51:50Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-18T22:46:49Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_voxpopuli_de_Merkel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_de_Merkel
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4112
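The card does not include an inference example; the sketch below shows a typical SpeechT5 text-to-speech setup. It assumes the Microsoft HiFi-GAN vocoder and an x-vector speaker embedding from the CMU ARCTIC dataset, and the German sentence is made up; none of this is taken from the original training code.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("ManuD/speecht5_finetuned_voxpopuli_de_Merkel")
model = SpeechT5ForTextToSpeech.from_pretrained("ManuD/speecht5_finetuned_voxpopuli_de_Merkel")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# One speaker embedding from cmu-arctic-xvectors (index chosen arbitrarily).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Guten Tag, meine Damen und Herren.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```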
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4831 | 4.06 | 1000 | 0.4406 |
| 0.4583 | 8.12 | 2000 | 0.4271 |
| 0.4482 | 12.18 | 3000 | 0.4177 |
| 0.4435 | 16.24 | 4000 | 0.4148 |
| 0.433 | 20.3 | 5000 | 0.4142 |
| 0.4333 | 24.37 | 6000 | 0.4128 |
| 0.4306 | 28.43 | 7000 | 0.4111 |
| 0.4288 | 32.49 | 8000 | 0.4110 |
| 0.4262 | 36.55 | 9000 | 0.4109 |
| 0.4228 | 40.61 | 10000 | 0.4112 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|