modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 18:27:28) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 532 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 18:27:19) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
asure22/dbert_qa_model_070623
|
asure22
| 2023-06-08T12:58:15Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-08T02:38:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: dbert_qa_model_070623
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbert_qa_model_070623
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
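The card leaves usage undocumented; as a minimal sketch (the pipeline call below is an assumption based on the model's `question-answering` tag, not code provided by the author), the checkpoint can be queried like this:
```python
from transformers import pipeline

# Hypothetical usage: load the checkpoint with the standard QA pipeline.
qa = pipeline("question-answering", model="asure22/dbert_qa_model_070623")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="dbert_qa_model_070623 is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```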
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.5076 |
| 2.746 | 2.0 | 500 | 1.8158 |
| 2.746 | 3.0 | 750 | 1.7495 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
necrashter/SaShiMi-796
|
necrashter
| 2023-06-08T12:57:01Z | 0 | 1 |
pytorch
|
[
"pytorch",
"arxiv:2202.09729",
"license:mit",
"region:us"
] | null | 2023-06-07T15:42:35Z |
---
license: mit
library_name: pytorch
---
# SaShiMi-796
This repository contains pre-trained models for [SaShiMi-796](https://github.com/necrashter/SaShiMi-796), a replication of the paper ["It's Raw! Audio Generation with State-Space Models"](https://arxiv.org/abs/2202.09729) from scratch in PyTorch.
It was developed as a part of the course [METU CENG 796 Deep Generative Models](https://user.ceng.metu.edu.tr/~gcinbis/courses/Spring23/CENG796/index.html) in Spring 2023.
See the following links for more information:
- [GitHub repository of the code](https://github.com/necrashter/SaShiMi-796)
- [Webpage containing the generated samples](https://necrashter.github.io/sashimi-796)
- [Mirror](https://user.ceng.metu.edu.tr/~ilker/sashimi/)
The models and the dataset in this repository will be automatically downloaded and extracted by `download_data.sh` script in the [GitHub repository](https://github.com/necrashter/SaShiMi-796).
In addition, there is also a zip file containing the [Youtube Mix dataset](https://huggingface.co/datasets/krandiash/youtubemix). The only difference in our version of the dataset is that it's split into train-validation-test sets as described in the README file of the dataset.
We had to upload our own version of this dataset because it's not possible to download it from the official repository using `wget` due to authorization issues.
_Please note that the Youtube Mix dataset is not our own work ([original Youtube video](https://www.youtube.com/watch?v=EhO_MrRfftU)), hence is not covered under the same license as the model. The dataset is provided for academic and research purposes only, and it should be used as such in order to constitute fair use under the US copyright law. We take no responsibility for any copyright infringements that may take place by users who download and use this dataset._
## Reproduction Results
With an 8-layer SaShiMi model, we managed to achieve an NLL of **1.325** (in base 2) after 160 epochs.
For comparison, the result reported in the paper is **1.294**.
Although our result is slightly higher, the model in the paper was trained for longer (600K steps, as reported on page 19, which would be about 400 epochs in our setup).
We believe it's reasonable to expect that our model can achieve the same or better NLL value with longer training and/or better hyperparameter choices.
Furthermore, [our generated samples](https://necrashter.github.io/sashimi-796) are similar to [the ones provided by the authors](https://hazyresearch.stanford.edu/sashimi-examples/).
Therefore, we think that we've successfully reproduced the paper.
|
paladinarbalest/ppo-LunarLander-v2
|
paladinarbalest
| 2023-06-08T12:55:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T12:54:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.17 +/- 24.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
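Until the TODO above is filled in, here is a minimal loading sketch; the checkpoint filename is an assumption based on the usual `package_to_hub` naming convention and may differ in this repo.
```python
import numpy as np
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed, not confirmed by the card).
checkpoint = load_from_hub(
    repo_id="paladinarbalest/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Query the policy on a dummy 8-dimensional LunarLander-v2 observation.
obs = np.zeros(8, dtype=np.float32)
action, _state = model.predict(obs, deterministic=True)
print(action)
```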
|
wykonos/a2c-AntBulletEnv-v0
|
wykonos
| 2023-06-08T12:41:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-07T21:37:23Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1158.41 +/- 308.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
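As with the TODO above, a minimal loading sketch follows; the filename is assumed from the usual `package_to_hub` convention, and rolling out full episodes would additionally require `pybullet_envs` to register AntBulletEnv-v0.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download and load the checkpoint (filename is an assumption).
checkpoint = load_from_hub(
    repo_id="wykonos/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

# The saved observation space lets us query the policy without creating the env.
obs = model.observation_space.sample()
action, _state = model.predict(obs, deterministic=True)
print(action)
```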
|
P3ps/bert-finetuned-cross-ner-v3
|
P3ps
| 2023-06-08T12:40:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-08T11:20:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-cross-ner-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-cross-ner-v3
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1790
- Precision: 0.8305
- Recall: 0.8629
- F1: 0.8464
- Accuracy: 0.9559
## Model description
More information needed
## Intended uses & limitations
More information needed
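Usage is not documented; a minimal sketch using the standard `transformers` token-classification pipeline (an assumption based on the model's tags, with an arbitrary example sentence) is:
```python
from transformers import pipeline

# Hypothetical usage: group sub-token predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="P3ps/bert-finetuned-cross-ner-v3",
    aggregation_strategy="simple",
)
print(ner("Marie Curie worked at the University of Paris."))
```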
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2023 | 1.0 | 2607 | 0.1921 | 0.7785 | 0.8197 | 0.7985 | 0.9468 |
| 0.1244 | 2.0 | 5214 | 0.1740 | 0.8211 | 0.8541 | 0.8373 | 0.9547 |
| 0.0792 | 3.0 | 7821 | 0.1790 | 0.8305 | 0.8629 | 0.8464 | 0.9559 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tmpusr/ppo-SnowballTarget
|
tmpusr
| 2023-06-08T12:39:38Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-08T11:47:43Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tmpusr/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sadFaceEmoji/gpt-neo-1.3B-poem
|
sadFaceEmoji
| 2023-06-08T12:33:12Z | 8 | 0 |
peft
|
[
"peft",
"text-generation",
"en",
"dataset:sadFaceEmoji/english-poems",
"region:us"
] |
text-generation
| 2023-06-08T12:32:13Z |
---
library_name: peft
datasets:
- sadFaceEmoji/english-poems
language:
- en
pipeline_tag: text-generation
---
|
Yhyu13/CAMEL-13B-Combined-Data-gptq-4bit
|
Yhyu13
| 2023-06-08T12:26:19Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T12:21:10Z |
---
license: apache-2.0
---
GPTQ 4-bit quantization (no act-order, for broader compatibility) that works in text-generation-webui.
Generated using scripts from https://gitee.com/yhyu13/llama_-tools
Original weights: https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data
|
ighina/roberta_topseg_mean_wikicity
|
ighina
| 2023-06-08T12:16:22Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-08T12:13:35Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ighina/roberta_topseg_mean_wikicity
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ighina/roberta_topseg_mean_wikicity')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ighina/roberta_topseg_mean_wikicity')
model = AutoModel.from_pretrained('ighina/roberta_topseg_mean_wikicity')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ighina/roberta_topseg_mean_wikicity)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11254 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
davanstrien/blbooksgenre_topics
|
davanstrien
| 2023-06-08T12:14:16Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"en",
"dataset:blbooksgenre",
"license:mit",
"region:us"
] |
text-classification
| 2023-06-08T12:10:35Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
license: mit
datasets:
- blbooksgenre
language:
- en
---
# blbooksgenre_topics
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("davanstrien/blbooksgenre_topics")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 57
* Number of training documents: 43752
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | poems - novel - poem - prose - book | 11 | -1_poems_novel_poem_prose |
| 0 | poems - poem - poetry - poets - poetical | 18624 | 0_poems_poem_poetry_poets |
| 1 | novel - author - poem - heir - tales | 4698 | 1_novel_author_poem_heir |
| 2 | ireland - dublin - scotland - irish - edinburgh | 3576 | 2_ireland_dublin_scotland_irish |
| 3 | geography - geographical - maps - map - history | 3104 | 3_geography_geographical_maps_map |
| 4 | shakespeare - acts - prose - comedy - theatre | 1377 | 4_shakespeare_acts_prose_comedy |
| 5 | county - counties - pennsylvania - hampshire - history | 1089 | 5_county_counties_pennsylvania_hampshire |
| 6 | france - spain - europe - pyrenees - paris | 990 | 6_france_spain_europe_pyrenees |
| 7 | sailing - nautical - maritime - boat - voyages | 986 | 7_sailing_nautical_maritime_boat |
| 8 | antiquity - greeks - rome - romans - greece | 744 | 8_antiquity_greeks_rome_romans |
| 9 | illustrations - drawings - pencil - drawn - sketches | 631 | 9_illustrations_drawings_pencil_drawn |
| 10 | africa - transvaal - cape - zululand - african | 610 | 10_africa_transvaal_cape_zululand |
| 11 | egypt - egyptians - cairo - sinai - egyptian | 610 | 11_egypt_egyptians_cairo_sinai |
| 12 | england - britain - british - george - english | 570 | 12_england_britain_british_george |
| 13 | california - alaska - regions - tour - states | 546 | 13_california_alaska_regions_tour |
| 14 | italia - italy - sicily - italian - italians | 491 | 14_italia_italy_sicily_italian |
| 15 | crimean - crimea - turkey - turks - russia | 481 | 15_crimean_crimea_turkey_turks |
| 16 | mexico - rio - honduras - colombia - panama | 433 | 16_mexico_rio_honduras_colombia |
| 17 | wales - maoriland - otago - zealand - auckland | 423 | 17_wales_maoriland_otago_zealand |
| 18 | waterloo - poem - battle - napoleon - battles | 405 | 18_waterloo_poem_battle_napoleon |
| 19 | mining - mineralogy - minerals - metallurgy - metals | 396 | 19_mining_mineralogy_minerals_metallurgy |
| 20 | history - america - states - historical - american | 377 | 20_history_america_states_historical |
| 21 | geology - geological - geologists - cambrian - fossils | 305 | 21_geology_geological_geologists_cambrian |
| 22 | quebec - scotia - canadas - ontario - province | 204 | 22_quebec_scotia_canadas_ontario |
| 23 | rambles - ramble - south - lands - scrambles | 194 | 23_rambles_ramble_south_lands |
| 24 | edition - second - series - third - revised | 159 | 24_edition_second_series_third |
| 25 | rudge - barnaby - hutton - rivers - osborne | 149 | 25_rudge_barnaby_hutton_rivers |
| 26 | memorials - anniversary - memorial - london - address | 134 | 26_memorials_anniversary_memorial_london |
| 27 | railway - railways - railroad - railroads - railroadiana | 115 | 27_railway_railways_railroad_railroads |
| 28 | forest - foresters - woods - trees - forestalled | 112 | 28_forest_foresters_woods_trees |
| 29 | philosophy - humanity - philosophie - moralities - conscience | 97 | 29_philosophy_humanity_philosophie_moralities |
| 30 | gazetteer - geography - geographical - dictionary - topographical | 96 | 30_gazetteer_geography_geographical_dictionary |
| 31 | goldsmith - goldsmiths - novel - writings - epistle | 93 | 31_goldsmith_goldsmiths_novel_writings |
| 32 | regulations - members - committees - rules - committee | 89 | 32_regulations_members_committees_rules |
| 33 | odes - poems - poem - ode - hymno | 87 | 33_odes_poems_poem_ode |
| 34 | doctor - doctors - physician - patients - physicians | 79 | 34_doctor_doctors_physician_patients |
| 35 | geography - schools - longmans - colleges - school | 77 | 35_geography_schools_longmans_colleges |
| 36 | juan - juana - sequel - carlos - genista | 63 | 36_juan_juana_sequel_carlos |
| 37 | sporting - sports - sport - sportsmans - rugby | 56 | 37_sporting_sports_sport_sportsmans |
| 38 | detective - detectives - crime - policeman - city | 52 | 38_detective_detectives_crime_policeman |
| 39 | blanc - mont - blanche - montserrat - montacute | 47 | 39_blanc_mont_blanche_montserrat |
| 40 | jack - jacks - jackdaw - house - author | 46 | 40_jack_jacks_jackdaw_house |
| 41 | dutch - netherlands - holland - dutchman - dutchesse | 43 | 41_dutch_netherlands_holland_dutchman |
| 42 | spider - spiders - adventure - web - webs | 35 | 42_spider_spiders_adventure_web |
| 43 | madrasiana - madras - malabar - mysore - district | 31 | 43_madrasiana_madras_malabar_mysore |
| 44 | doncaster - 1835 - gazette - 1862 - 1868 | 31 | 44_doncaster_1835_gazette_1862 |
| 45 | lays - lay - land - empire - sea | 28 | 45_lays_lay_land_empire |
| 46 | cyprus - syria - palestine - island - asia | 28 | 46_cyprus_syria_palestine_island |
| 47 | gipsies - gipsy - snakes - encyclopaedia - bunyan | 20 | 47_gipsies_gipsy_snakes_encyclopaedia |
| 48 | abydos - bride - turkish - marriage - euphrosyne | 18 | 48_abydos_bride_turkish_marriage |
| 49 | derby - castleton - buxton - matlock - nottingham | 16 | 49_derby_castleton_buxton_matlock |
| 50 | corsair - tale - carlo - mystery - monte | 16 | 50_corsair_tale_carlo_mystery |
| 51 | bushman - bushranger - bushrangers - australian - novel | 13 | 51_bushman_bushranger_bushrangers_australian |
| 52 | months - italy - weeks - six - france | 12 | 52_months_italy_weeks_six |
| 53 | kitty - kittys - catspaw - catriona - father | 12 | 53_kitty_kittys_catspaw_catriona |
| 54 | lighthouses - lighthouse - beacons - lights - lighting | 12 | 54_lighthouses_lighthouse_beacons_lights |
| 55 | balfour - kidnapped - balfouriana - memoirs - adventures | 11 | 55_balfour_kidnapped_balfouriana_memoirs |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: 57
* seed_topic_list: None
* top_n_words: 10
* verbose: True
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11
|
angshineee/pokemon-lora
|
angshineee
| 2023-06-08T12:08:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-08T06:04:38Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - angshineee/pokemon-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
kejolong/nanashe
|
kejolong
| 2023-06-08T12:02:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T12:01:05Z |
---
license: creativeml-openrail-m
---
|
TheBloke/selfee-7B-GGML
|
TheBloke
| 2023-06-08T11:56:53Z | 0 | 5 | null |
[
"license:other",
"region:us"
] | null | 2023-06-08T11:24:03Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 7B GGML
These files are GGML format model files for [Kaist AI's Selfee 7B](https://huggingface.co/kaist-ai/selfee-7b-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/selfee-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/selfee-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/selfee-7B-fp16)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
They will NOT be compatible with koboldcpp, text-generation-ui, and other UIs and libraries yet. Support is expected to come over the next few days.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| selfee-7B.ggmlv3.q2_K.bin | q2_K | 2 | 2.80 GB | 5.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| selfee-7B.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.55 GB | 6.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-7B.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.23 GB | 5.73 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-7B.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.90 GB | 5.40 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| selfee-7B.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| selfee-7B.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| selfee-7B.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.05 GB | 6.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| selfee-7B.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.79 GB | 6.29 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| selfee-7B.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| selfee-7B.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| selfee-7B.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.77 GB | 7.27 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| selfee-7B.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.63 GB | 7.13 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| selfee-7B.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| selfee-7B.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m selfee-7B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
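The same files can also be driven from Python with llama-cpp-python (listed above); a minimal sketch, assuming a version contemporary with GGML v3 files and that the q5_0 file has been downloaded into the working directory:
```python
from llama_cpp import Llama

# Load the locally downloaded GGML file (path is an assumption).
llm = Llama(model_path="selfee-7B.ggmlv3.q5_0.bin", n_ctx=2048)

output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```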
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaist AI's Selfee 7B
|
ruwan/open-llama-sharded-1GB-7B-alpaca-vmware
|
ruwan
| 2023-06-08T11:43:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T09:49:12Z |
---
license: apache-2.0
---
Note: use the original OpenLLaMA tokenizer.
model_path = 'ruwan/open-llama-sharded-1GB-7B-alpaca-vmware'
tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_7b")
model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto' )
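For reference, a self-contained sketch of the snippet above (the prompt template is an assumption in the Alpaca style and may not match the tuning data; `device_map='auto'` requires `accelerate`):
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = "ruwan/open-llama-sharded-1GB-7B-alpaca-vmware"

# Per the note above, the tokenizer comes from the original OpenLLaMA release.
tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_7b")
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt (assumed, not confirmed by the card).
prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```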
|
NYTK/PULI-BERT-Large
|
NYTK
| 2023-06-08T11:39:36Z | 299 | 3 |
transformers
|
[
"transformers",
"pytorch",
"megatron-bert",
"fill-mask",
"puli",
"hu",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-09T14:51:30Z |
---
language:
- hu
tags:
- fill-mask
- puli
license: cc-by-nc-4.0
widget:
- text: Mesélek egy [MASK] az oroszlánról.
---
# PULI BERT-Large
For further details, see [our demo site](https://juniper.nytud.hu/demo/nlp).
- Hungarian BERT large model (MegatronBERT)
- Trained with Megatron-DeepSpeed [github](https://github.com/microsoft/Megatron-DeepSpeed)
- Dataset: 36.3 billion words
- Checkpoint: 1 500 000 steps
## Limitations
- max_seq_length = 1024
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-puli,
title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Hungary},
author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás},
pages = {247--262}
}
```
## Usage
```python
from transformers import BertTokenizer, MegatronBertModel
tokenizer = BertTokenizer.from_pretrained('NYTK/PULI-BERT-Large')
model = MegatronBertModel.from_pretrained('NYTK/PULI-BERT-Large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt', do_lower_case=False)
output = model(**encoded_input)
```
|
Domo123/tanya-mama-ner
|
Domo123
| 2023-06-08T11:32:26Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-08T10:17:52Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tanya-mama-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tanya-mama-ner
This model is a fine-tuned version of [cahya/xlm-roberta-base-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-base-indonesian-NER) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1480
- Precision: 0.8193
- Recall: 0.8765
- F1: 0.8470
- Accuracy: 0.9521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 120 | 0.1731 | 0.7970 | 0.8644 | 0.8294 | 0.9441 |
| No log | 2.0 | 240 | 0.1480 | 0.8193 | 0.8765 | 0.8470 | 0.9521 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rshvetsov/ggml-quantized
|
rshvetsov
| 2023-06-08T11:24:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-08T08:34:07Z |
llama.cpp-converted models, produced with llama.cpp commit `ffb06a3`.
---
license: other
License: Non-commercial bespoke license
Where to send questions or comments about the model: Questions and comments about LLaMA can be sent via the GitHub repository of the project, by opening an issue.
---
|
BrainTheos/whisper-base-ln
|
BrainTheos
| 2023-06-08T11:17:20Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ln",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-07T19:27:37Z |
---
language:
- ln
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Base Lingala - BrainTheos
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Fleurs
type: google/fleurs
config: ln_cd
split: validation
args: ln_cd
metrics:
- name: Wer
type: wer
value: 25.050916496945007
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Lingala - BrainTheos
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7265
- Wer: 25.0509
## Model description
More information needed
## Intended uses & limitations
More information needed
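Usage is not documented; a minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder, and ffmpeg is needed for decoding) is:
```python
from transformers import pipeline

# Hypothetical usage: transcribe a Lingala audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="BrainTheos/whisper-base-ln")
print(asr("sample.wav")["text"])
```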
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0081 | 21.0 | 1000 | 0.6218 | 29.8710 |
| 0.0016 | 42.01 | 2000 | 0.6865 | 25.1188 |
| 0.0009 | 63.01 | 3000 | 0.7152 | 24.9151 |
| 0.0007 | 85.0 | 4000 | 0.7265 | 25.0509 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.12.1.dev0
- Tokenizers 0.13.3
|
shashankjat10/lunar-lander
|
shashankjat10
| 2023-06-08T11:13:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T11:13:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.08 +/- 15.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
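Pending the author's own code, the checkpoint can be loaded like other SB3 Hub models; the filename below is an assumption, so check the repo's file list if it differs.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download and load the trained agent (filename assumed).
path = load_from_hub("shashankjat10/lunar-lander", "ppo-LunarLander-v2.zip")
model = PPO.load(path)
print(model.policy)
```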
|
Hyeoli/BERT2BERT_Model
|
Hyeoli
| 2023-06-08T11:07:00Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-07T10:54:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: BERT2BERT_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT2BERT_Model
This model is a fine-tuned version of [patrickvonplaten/bert2bert_cnn_daily_mail](https://huggingface.co/patrickvonplaten/bert2bert_cnn_daily_mail) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 63 | 1.9514 |
| No log | 2.0 | 126 | 1.9279 |
| No log | 3.0 | 189 | 1.9223 |
| No log | 4.0 | 252 | 1.9217 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lll-yuh-lll/YuhMix
|
lll-yuh-lll
| 2023-06-08T10:55:15Z | 0 | 28 | null |
[
"stable-diffusion",
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-24T01:39:47Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
## 【Overview】
"YuhMix" is a model created by block-merging other models on top of "Counterfeit".
It keeps Counterfeit's expressive strengths in composition and posing and **changes only the art style**; everything else was tuned so that the merge affects it as little as possible.
The recommended negative TI is "EasyNegativeV2".
There are no recommended settings for VAE, Steps, CFG Scale, Sampler, or Upscaler; configure them to your own taste.
**Many thanks to the authors of the merge source models.**
If there is a model you would like merged in, **additions will be considered** on request.
Twitter: [@lll_yuh_lll](https://twitter.com/lll_yuh_lll)
***
## 【Merge source models】
**YuhMix_A1: anime-style shading**
Counterfeit-V3.0 + ambientmix
**YuhMix_P1: slightly anime-style shading**
Counterfeit-V3.0 + Pika's New Generation v2.0
**YuhMix_L1: flat shading**
Counterfeit-V3.0 + 7th_anime_v3_B
**YuhMix_C1: flat and cute**
Counterfeit-V3.0 + CuteYukiMix v3.0
***
## 【YuhMix_A1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2530832888, Size: 512x768, Model hash: 5b0478a78a, Model: YuhMix_A1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
1 girl, adventurer, has weapon, action
Negative prompt: EasyNegativeV2, watermark
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1229935043, Size: 512x768, Model hash: 5b0478a78a, Model: YuhMix_A1_fp16, Denoising strength: 0.45, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 20, Hires upscaler: 4x-AnimeSharp
```
## 【YuhMix_P1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3471603083, Size: 512x768, Model hash: a8c732dd6d, Model: YuhMix_P1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
2D, 1 girl, smile, school uniform, shinjuku, night scene, magic circle, action
Negative prompt: EasyNegativeV2, 3D, watermark
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1799350649, Size: 512x768, Model hash: a8c732dd6d, Model: YuhMix_P1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```
## 【YuhMix_L1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 835610278, Size: 512x768, Model hash: 23eb8adb20, Model: YuhMix_L1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
2D, 1 girl, smile, idol costume, shouting into a microphone, dancing, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1192103882, Size: 512x768, Model hash: 23eb8adb20, Model: YuhMix_L1_fp16, Denoising strength: 0.55, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```
## 【YuhMix_C1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 986981883, Size: 512x768, Model hash: 9daf68fee9, Model: YuhMix_C1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
holy sword, cute girl
Negative prompt: EasyNegativeV2, 3D, watermark, animal ears
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2760921822, Size: 512x768, Model hash: 9daf68fee9, Model: YuhMix_C1_fp16, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp, Version: v1.3.2
```
|
MJa6/bert-base-cased-wikitext2
|
MJa6
| 2023-06-08T10:46:52Z | 194 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-08T10:45:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 9.1420
- eval_runtime: 18.4655
- eval_samples_per_second: 108.798
- eval_steps_per_second: 13.647
- epoch: 0.01
- step: 27
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MJa6/gpt2-wikitext2
|
MJa6
| 2023-06-08T10:41:53Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T10:39:11Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 7.5813
- eval_runtime: 21.0482
- eval_samples_per_second: 91.884
- eval_steps_per_second: 11.497
- epoch: 0.08
- step: 184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
fatimas/gpt2-wikitext2
|
fatimas
| 2023-06-08T10:41:45Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T10:37:41Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 7.0633
- eval_runtime: 20.711
- eval_samples_per_second: 93.38
- eval_steps_per_second: 11.685
- epoch: 0.22
- step: 488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Khushnur/t5-small-end2end-questions-generation_squad_aug_
|
Khushnur
| 2023-06-08T10:37:27Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T09:55:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-end2end-questions-generation_squad_aug_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-end2end-questions-generation_squad_aug_
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
haddadalwi/bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
|
haddadalwi
| 2023-06-08T10:32:51Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-27T13:49:20Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.3 | 100 | 0.3653 |
| No log | 2.6 | 200 | 0.4152 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
PT-10/flan-t5-small-samsum
|
PT-10
| 2023-06-08T10:24:52Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T09:57:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: flan-t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
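The card leaves usage empty; a minimal sketch with the `transformers` summarization pipeline (the dialogue below is an arbitrary SAMSum-style example) is:
```python
from transformers import pipeline

# Hypothetical usage: summarize a short chat dialogue, as in the samsum data.
summarizer = pipeline("summarization", model="PT-10/flan-t5-small-samsum")

dialogue = (
    "Anna: Are we still on for lunch today?\n"
    "Tom: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```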
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DebeshSahoo/text2sql-finetune
|
DebeshSahoo
| 2023-06-08T10:23:38Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"code",
"text-classification",
"dataset:wikisql",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T07:10:18Z |
---
datasets:
- wikisql
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- code
---
Base model: t5-small
# Training results
[17610/17610 1:32:31, Epoch 9/10]
| Step | Training Loss | Validation Loss |
|------:|--------------:|----------------:|
| 1000 | 2.682400 | 0.829368 |
| 2000 | 0.914000 | 0.568155 |
| 3000 | 0.707700 | 0.465733 |
| 4000 | 0.613500 | 0.408758 |
| 5000 | 0.557300 | 0.374811 |
| 6000 | 0.515800 | 0.350752 |
| 7000 | 0.487000 | 0.331517 |
| 8000 | 0.466100 | 0.319071 |
| 9000 | 0.449400 | 0.309488 |
| 10000 | 0.438800 | 0.301829 |
| 11000 | 0.430000 | 0.296482 |
| 12000 | 0.420200 | 0.292672 |
| 13000 | 0.418200 | 0.290445 |
| 14000 | 0.413400 | 0.288662 |
| 15000 | 0.410100 | 0.287757 |
| 16000 | 0.412600 | 0.287280 |
| 17000 | 0.410000 | 0.287134 |
question: what is id with name jui and age equal 25
table: ['id', 'name', 'age']
SELECT ID FROM table WHERE name = jui AND age equal 25
# Copy the code below into your notebook to use the model
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("DebeshSahoo/text2sql-finetune")
# Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("DebeshSahoo/text2sql-finetune")
# Rest of the code for preparing input, generating predictions, and decoding the output...
from typing import List
table_prefix = "table:"
question_prefix = "question:"
def prepare_input(question: str, table: List[str]):
    print("question:", question)
    print("table:", table)
    join_table = ",".join(table)
    inputs = f"{question_prefix} {question} {table_prefix} {join_table}"
    input_ids = tokenizer(inputs, max_length=700, return_tensors="pt").input_ids
    return input_ids
def inference(question: str, table: List[str]) -> str:
    input_data = prepare_input(question=question, table=table)
    input_data = input_data.to(model.device)
    outputs = model.generate(inputs=input_data, num_beams=10, top_k=10, max_length=512)
    result = tokenizer.decode(token_ids=outputs[0], skip_special_tokens=True)
    return result
# The example below assumes the wikisql dataset has been loaded, e.g.:
# from datasets import load_dataset; dataset = load_dataset("wikisql")
test_id = 1000
print("model result:", inference(dataset["test"][test_id]["question"], dataset["test"][test_id]["table"]["header"]))
print("real result:", dataset["test"][test_id]["sql"]["human_readable"])
inference("what is id with name jui and age equal 25", ["id", "name", "age"])
|
Den4ikAI/FRED-T5-XL_instructor_chitchat
|
Den4ikAI
| 2023-06-08T10:14:23Z | 114 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-04T11:16:56Z |
---
license: mit
language:
- ru
pipeline_tag: text2text-generation
widget:
- text: '<SC6>Человек: Ответь на вопрос. Почему трава зеленая?\nБот: <extra_id_0>'
- text: '<SC1>Тебя зовут Анфиса. Тебе интересно машинное обучение.\nСобеседник сказал: Привет\nТы ответил: <extra_id_0>'
- text: '<SC6>Тебя зовут Анфиса. Тебе интересно машинное обучение.\nСобеседник сказал: Что делать, если шалят нервишки?\nТы ответил: <extra_id_0>'
---
# Den4ikAI/FRED-T5-XL_instructor_chitchat
An instruction-following model based on FRED-T5-XL. Note the prompts used in the chit-chat example below.
# Usage example [Instruct]
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, GenerationConfig
import torch

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

tokenizer = AutoTokenizer.from_pretrained("Den4ikAI/FRED-T5-XL_instructor_chitchat")
model = AutoModelForSeq2SeqLM.from_pretrained("Den4ikAI/FRED-T5-XL_instructor_chitchat", torch_dtype=torch.float16).to(device)
model.eval()

generation_config = GenerationConfig.from_pretrained("Den4ikAI/FRED-T5-XL_instructor_chitchat")


def generate(prompt):
    data = tokenizer(f"<SC6>Человек: {prompt}\nБот: <extra_id_0>", return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **data,
        generation_config=generation_config
    )[0]
    print(tokenizer.decode(data["input_ids"][0].tolist()))
    out = tokenizer.decode(output_ids.tolist())
    return out


while True:
    print(generate(input(":> ")))
```
# Usage example [Chitchat]
```python
import torch
import transformers

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained("Den4ikAI/FRED-T5-XL_instructor_chitchat")
t5_model = transformers.T5ForConditionalGeneration.from_pretrained("Den4ikAI/FRED-T5-XL_instructor_chitchat").to(device)
generation_config = transformers.GenerationConfig.from_pretrained("Den4ikAI/FRED-T5-XL_instructor_chitchat")

while True:
    print('-' * 80)
    dialog = []
    while True:
        msg = input('H:> ').strip()
        if len(msg) == 0:
            break
        msg = msg[0].upper() + msg[1:]
        dialog.append('Собеседник сказал: ' + msg)
        # This prompt variant lets the model both hold a dialogue and follow instructions.
        # prompt = '<SC6>Тебя зовут Анфиса. Тебе интересно машинное обучение.' + '\n'.join(dialog) + '\nТы ответил: <extra_id_0>'
        # The variant below is a prompt for plain dialogue only. In this mode the model does not
        # glitch by echoing chunks of the prompt in its reply.
        prompt = '<SC1>Тебя зовут Анфиса. Тебе интересно машинное обучение.' + '\n'.join(dialog) + '\nТы ответил: <extra_id_0>'
        input_ids = t5_tokenizer(prompt, return_tensors='pt').input_ids
        out_ids = t5_model.generate(input_ids=input_ids.to(device), generation_config=generation_config)
        t5_output = t5_tokenizer.decode(out_ids[0][1:])
        if '</s>' in t5_output:
            t5_output = t5_output[:t5_output.find('</s>')].strip()
        t5_output = t5_output.replace('<extra_id_0>', '').strip()
        t5_output = t5_output.split('Собеседник')[0].strip()
        print('B:> {}'.format(t5_output))
        dialog.append('Ты ответил: ' + t5_output)
```
# Citation
```
@MISC{Den4ikAI/FRED-T5-XL_instructor_chitchat,
author = {Denis Petrov},
title = {Russian Instruct and Chitchat model},
url = {https://huggingface.co/Den4ikAI/FRED-T5-XL_instructor_chitchat/},
year = 2023
}
```
|
VaianiLorenzo/ViPER-VTF
|
VaianiLorenzo
| 2023-06-08T10:06:49Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-08T09:21:46Z |
# ViPER-VTF
## (Vision Text FAU)
This repository contains the checkpoints for the ViPER model.
It is a Perceiver-based model finetuned on the concatenation of visual, textual and FAU-related features.
For more information on how to use this model please refer to the following [repository](https://github.com/VaianiLorenzo/ViPER)
If you find this useful please cite:
```
@inproceedings{vaiani2022viper,
title={ViPER: Video-based Perceiver for Emotion Recognition},
author={Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo},
booktitle={Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge},
pages={67--73},
year={2022}
}
```
For any other questions, feel free to contact me at lorenzo.vaiani@polito.it
|
VaianiLorenzo/ViPER-VAT
|
VaianiLorenzo
| 2023-06-08T09:59:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-08T09:21:07Z |
# ViPER-VAT
## (Vision Audio Text)
This repository contains the checkpoints for the ViPER model.
It is a Perceiver-based model finetuned on the concatenation of visual, acoustic and textual features.
For more information on how to use this model please refer to the following [repository](https://github.com/VaianiLorenzo/ViPER)
If you find this useful please cite:
```
@inproceedings{vaiani2022viper,
title={ViPER: Video-based Perceiver for Emotion Recognition},
author={Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo},
booktitle={Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge},
pages={67--73},
year={2022}
}
```
For any other questions, feel free to contact me at lorenzo.vaiani@polito.it
|
mfaiq2307/faiq-wav2vec2-large-xlsr-indo-demo-v100-batch64
|
mfaiq2307
| 2023-06-08T09:50:38Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-08T08:01:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: faiq-wav2vec2-large-xlsr-indo-demo-v100-batch64
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.43878832999860407
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# faiq-wav2vec2-large-xlsr-indo-demo-v100-batch64
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3267
- Wer: 0.4388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2549 | 11.59 | 400 | 0.6715 | 0.7735 |
| 0.3726 | 23.19 | 800 | 0.3267 | 0.4388 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.6.1
- Tokenizers 0.13.3
|
mun33b/ppo-LunarLander-v2
|
mun33b
| 2023-06-08T09:27:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T09:27:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.13 +/- 15.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
nodissasemble/7CTOs-document-title-generator
|
nodissasemble
| 2023-06-08T09:04:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-07T07:55:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: 7CTOs-document-title-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 7CTOs-document-title-generator
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9983
- Rouge1: 64.6108
- Rouge2: 54.9793
- Rougel: 62.1078
- Rougelsum: 62.1233
- Gen Len: 13.916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1616 | 1.0 | 625 | 1.0083 | 54.7061 | 43.4916 | 51.3187 | 51.3902 | 13.814 |
| 0.9933 | 2.0 | 1250 | 0.9913 | 63.5024 | 53.9145 | 60.941 | 60.9901 | 13.623 |
| 0.8891 | 3.0 | 1875 | 0.9960 | 64.8014 | 55.1159 | 62.1757 | 62.1882 | 13.839 |
| 0.7913 | 4.0 | 2500 | 0.9983 | 64.6108 | 54.9793 | 62.1078 | 62.1233 | 13.916 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
felixtran/layoutlmv3-rvl-cdip-small
|
felixtran
| 2023-06-08T09:03:18Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T04:27:28Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv3-rvl-cdip-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-rvl-cdip-small
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/MPT-7B-Instruct-GGML
|
TheBloke
| 2023-06-08T08:59:52Z | 53 | 29 |
transformers
|
[
"transformers",
"mpt",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-05-18T16:41:36Z |
---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# MPT-7B-Instruct GGML
This is GGML format quantised 4-bit, 5-bit and 8-bit GGML models of [MosaicML's MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct).
This repo is the result of converting to GGML and quantising.
Please note that these MPT GGMLs are **not compatible with llama.cpp**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [MPT-7B: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-GGML).
* [MPT-7B-Instruct: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Instruct-GGML).
* [MPT-7B-Storywriter: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML).
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `mpt7b-instruct.ggmlv3.q4_0.bin` | q4_0 | 4bit | 4.16GB | 6.2GB | 4-bit. |
| `mpt7b-instruct.ggmlv3.q4_1.bin` | q4_1 | 4bit | 4.99GB | 7.2GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| `mpt7b-instruct.ggmlv3.q5_0.bin` | q5_0 | 5bit | 4.57GB | 6.8GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `mpt7b-instruct.ggmlv3.q5_1.bin` | q5_1 | 5bit | 4.99GB | 7.2GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
| `mpt7b-instruct.ggmlv3.q8_0.bin` | q8_0 | 8bit | 7.48GB | 9.7GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
| `mpt7b-instruct.ggmlv3.fp16.bin` | fp16 | 16bit | 13.30GB | 16GB | Full 16-bit. |
## Compatibility
These files are **not** compatible with llama.cpp.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
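For example, here is a minimal sketch of loading one of these files with the `ctransformers` library (the exact keyword arguments may vary between ctransformers versions, so treat this as illustrative rather than definitive):
```python
from ctransformers import AutoModelForCausalLM

# Load a quantised GGML file from this repo as an MPT-type model
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MPT-7B-Instruct-GGML",
    model_file="mpt7b-instruct.ggmlv3.q4_0.bin",
    model_type="mpt",
)

print(llm("Write three bullet points about alpacas.", max_new_tokens=128))
```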
## Tutorial for using GPT4All-UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/spaces/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch
import transformers

config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    trust_remote_code=True
)
config.update({"max_seq_len": 4096})

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    config=config,
    trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
zeyneppktemm/flan-t5-base-imdb-text-classification
|
zeyneppktemm
| 2023-06-08T08:50:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-07T16:03:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0767
- F1: 95.084
- Gen Len: 2.4976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dosai/bert-sud
|
dosai
| 2023-06-08T08:49:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-08T04:10:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dosai/bert-sud
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dosai/bert-sud
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1151
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 103650, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.3012 | 0 |
| 0.1827 | 1 |
| 0.1587 | 2 |
| 0.1458 | 3 |
| 0.1339 | 4 |
| 0.1275 | 5 |
| 0.1202 | 6 |
| 0.1151 | 7 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Domo123/shopee-ner
|
Domo123
| 2023-06-08T08:30:26Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-08T07:51:40Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: shopee-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shopee-ner
This model is a fine-tuned version of [cahya/xlm-roberta-base-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-base-indonesian-NER) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0924
- Precision: 0.7557
- Recall: 0.8382
- F1: 0.7948
- Accuracy: 0.9696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 120 | 0.1103 | 0.7127 | 0.8130 | 0.7596 | 0.9641 |
| No log | 2.0 | 240 | 0.0924 | 0.7557 | 0.8382 | 0.7948 | 0.9696 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
diallomama/wav2vec2_xlsr
|
diallomama
| 2023-06-08T08:04:41Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-05T23:38:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_xlsr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_xlsr
This model is a fine-tuned version of [diallomama/wav2vec2_xlsr](https://huggingface.co/diallomama/wav2vec2_xlsr) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2204
- eval_wer: 0.9719
- eval_runtime: 923.0808
- eval_samples_per_second: 16.346
- eval_steps_per_second: 2.044
- epoch: 1.66
- step: 8400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
seanghay/bookmebus-sentiment-analysis
|
seanghay
| 2023-06-08T07:44:59Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T06:15:42Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bookmebus-roberta-2
results: []
widget:
- text: អ្នកបើកឡានមិនគោរពច្បាប់
- text: រហ័សទាន់ចិត្ត
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bookmebus-roberta-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1144
- Accuracy: 0.5
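A quick usage sketch with the standard `transformers` pipeline (the returned label names depend on the fine-tuning label mapping, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="seanghay/bookmebus-sentiment-analysis",
)

# Example reviews taken from the widget examples above
print(classifier("រហ័សទាន់ចិត្ត"))
print(classifier("អ្នកបើកឡានមិនគោរពច្បាប់"))
```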
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 249 | 1.2029 | 0.4931 |
| No log | 2.0 | 498 | 1.3241 | 0.5347 |
| 0.9919 | 3.0 | 747 | 1.3590 | 0.5 |
| 0.9919 | 4.0 | 996 | 1.4430 | 0.5139 |
| 0.7674 | 5.0 | 1245 | 1.5039 | 0.5486 |
| 0.7674 | 6.0 | 1494 | 1.6894 | 0.5208 |
| 0.5381 | 7.0 | 1743 | 1.9312 | 0.4861 |
| 0.5381 | 8.0 | 1992 | 1.9636 | 0.5208 |
| 0.3594 | 9.0 | 2241 | 2.0836 | 0.5 |
| 0.3594 | 10.0 | 2490 | 2.1144 | 0.5 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rohitp1/subh_w2lm_base_distill_noisy_teacher_mozilla_epochs_50_batch_16
|
rohitp1
| 2023-06-08T07:36:21Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-30T18:11:03Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: subh_w2lm_base_distill_noisy_teacher_mozilla_epochs_50_batch_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# subh_w2lm_base_distill_noisy_teacher_mozilla_epochs_50_batch_16
This model is a fine-tuned version of [rohitp1/ws_w2lm_base_plus_finetune_teacher_noise_mozilla_100_epochs_batch_8](https://huggingface.co/rohitp1/ws_w2lm_base_plus_finetune_teacher_noise_mozilla_100_epochs_batch_8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4070
- Wer: 0.3226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 4096
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1283 | 7.31 | 250 | 0.3295 | 0.3266 |
| 0.1111 | 14.63 | 500 | 0.3444 | 0.3236 |
| 0.0939 | 21.94 | 750 | 0.3664 | 0.3236 |
| 0.0826 | 29.26 | 1000 | 0.3828 | 0.3224 |
| 0.0751 | 36.57 | 1250 | 0.3977 | 0.3223 |
| 0.0703 | 43.89 | 1500 | 0.4070 | 0.3226 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
leonardPKU/PPO_Lunar
|
leonardPKU
| 2023-06-08T07:32:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T07:31:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.03 +/- 13.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
angshineee/dogs
|
angshineee
| 2023-06-08T07:25:00Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-08T07:18:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - angshineee/dogs
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
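A minimal usage sketch with `diffusers` (assuming a diffusers version that supports `UNet2DConditionModel.load_attn_procs`; the prompt follows the instance prompt above):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the LoRA attention weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("angshineee/dogs")

image = pipe("a photo of sks dog", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```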
|
casque/hipoly_3dcg_v7-epoch-000012
|
casque
| 2023-06-08T07:21:29Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T07:20:39Z |
---
license: creativeml-openrail-m
---
|
sunil18p31a0101/q-FrozenLake-v1-4x4-noSlippery
|
sunil18p31a0101
| 2023-06-08T07:18:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T07:18:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="sunil18p31a0101/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Jinouga/harunosakurav3
|
Jinouga
| 2023-06-08T06:59:15Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-08T06:55:54Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### HarunoSakuraV3 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
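Alternatively, a minimal `diffusers` sketch (the repository is tagged for `StableDiffusionPipeline`, so direct loading should work; the prompt token below is an assumption based on the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Jinouga/harunosakurav3", torch_dtype=torch.float16
).to("cuda")

# "harunosakurav3" is assumed to be the instance token for this DreamBooth concept
image = pipe("portrait of harunosakurav3, highly detailed").images[0]
image.save("sample.png")
```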
Sample pictures of this concept:
|
Jagannath/phishNet
|
Jagannath
| 2023-06-08T06:58:06Z | 67 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T06:50:18Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: phishNet
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# phishNet
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
casque/alstroemeriaMix_v10
|
casque
| 2023-06-08T06:56:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T06:38:41Z |
---
license: creativeml-openrail-m
---
|
Jagannath/my_model
|
Jagannath
| 2023-06-08T06:54:19Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T06:54:01Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: my_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of [./my_model](https://huggingface.co/./my_model) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Xoyo/ppo-CartPole-v1
|
Xoyo
| 2023-06-08T06:32:07Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T06:30:56Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 185.00 +/- 55.89
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'PPO-test'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Xoyo/ppo-CartPole-v1'
'f': '/root/.local/share/jupyter/runtime/kernel-4750a57a-4905-43ff-90d9-c956385a49c7.json'
'batch_size': 512
'minibatch_size': 128}
```
|
eunyounglee/pegasus-samsum
|
eunyounglee
| 2023-06-08T06:31:03Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T05:36:46Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6909 | 0.54 | 500 | 1.4848 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
junwai7159/ppo-LunarLander-v2
|
junwai7159
| 2023-06-08T06:26:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T06:26:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 229.94 +/- 35.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YzZ-George/DeepSpeed-Chat-OPT-1.3B-3-3-3datasets
|
YzZ-George
| 2023-06-08T06:25:27Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-08T03:31:10Z |
---
license: apache-2.0
---
We train OPT-1.3B using three datasets: Dahoas/rm-static, Dahoas/full-hh-rlhf, and yitingxie/rlhf-reward-datasets.
Dahoas/synthetic-instruct-gptj-pairwise is not used because it lacks a test split.
|
seanghay/bookmebus-roberta
|
seanghay
| 2023-06-08T06:13:53Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T06:05:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bookmebus-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bookmebus-roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1487
- Accuracy: 0.5121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 463 | 1.2076 | 0.4587 |
| 1.2869 | 2.0 | 926 | 1.1487 | 0.5121 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
njuju/22
|
njuju
| 2023-06-08T06:05:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T06:03:47Z |
---
license: creativeml-openrail-m
---
|
Broszkit/Syazwina
|
Broszkit
| 2023-06-08T05:47:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T05:46:43Z |
---
license: creativeml-openrail-m
---
|
openaccess-ai-collective/minotaur-mpt-7b
|
openaccess-ai-collective
| 2023-06-08T05:44:02Z | 16 | 7 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"OpenAccess AI Collective",
"MPT",
"axolotl",
"custom_code",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"dataset:hellaswag",
"dataset:openai/summarize_from_feedback",
"dataset:riddle_sense",
"dataset:gsm8k",
"dataset:camel-ai/math",
"dataset:camel-ai/biology",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:winglian/evals",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-06T04:09:31Z |
---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
inference: false
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Minotaur MPT 7B
Minotaur 7B is an instruct fine-tuned model built on top of MPT-7B. Minotaur 7B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)
# Prompts
Chat-only style prompts using `USER:` and `ASSISTANT:`.
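For example, a rough sketch of using this prompt style with `transformers` (the tokenizer choice follows the MPT-7B notes further down this card; the generation settings are illustrative assumptions, not the authors' exact configuration):
```python
import torch
import transformers

model_name = "openaccess-ai-collective/minotaur-mpt-7b"

# MPT models were trained with the EleutherAI/gpt-neox-20b tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # MPT uses a custom architecture
)

# Build a chat-style prompt with the USER:/ASSISTANT: convention
prompt = "USER: Write a haiku about axolotls.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```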
# Training Datasets
Minotaur 7B model is fine-tuned on the following datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals)
- custom synthetic datasets around misconceptions, in-context QA, jokes, N-task problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows, instruct augmented for detailed explanations, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
# Shoutouts
Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models so they are more accessible to all.
# Demo
HF Demo in Spaces coming soon.
## Release Notes
- https://wandb.ai/wing-lian/mpt-7b-4k-minotaur/runs/i4zib0j4
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 7xA100 80GB
- 3 epochs taking approximately 6 hours
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model MPT-7B, please refer to its model card's Limitations Section for relevant information. (included below)
## Examples - results may vary based on temperature and other settings
```
USER: your have 3 apples. you eat 2 pears. how many apples do you have left?
ASSISTANT:To find the quantity of apples, we can subtract the number of pears from the number of apples.
In this case, there are 3 apples and 2 pears. This means there are 3 apples left.
```
wut?
```
USER: your have 3 apples. you eat 2 pears. how many apples do you have left?
ASSISTANT:You don't have any apples left. You had 3 and after eating 2 pears, it is 2. You ate the other apple.
```
🤣
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: your have 3 apples. you eat 2 pears. how many apples do you have left? Use multi-step reasoning.
ASSISTANT: I understand. You have three apples, two pears, and you ate two of the pears. You have three apples left.
```
👌 looks like you need the system prompt
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Solve for x: 5x = 3x + 10
ASSISTANT: To find the value of x, you can solve this equation by first adding both sides of the equation, as follows:
5x = 3x + 10
Now, simplify the equation to get:
x = 2.5
```
yup, it still sucks at math
```
USER: tell me a joke. then explain why it's funny.
ASSISTANT: What did the blind man say to the man eating steak?
He said, "Is that a cow or a sandwich?"
```
```
USER: write a sonnet about doves
ASSISTANT:Doves are a beautiful bird, and you can write a sonnet about them to express your love for them. This sonnet can include imagery of doves, including the sound of their wings as they fly, the softness of their feathers, and the peacefulness they bring to their surroundings. The structure of this sonnet can be similar to the Shakespearean sonnets, with 14 lines divided into three groups of rhyming couplets. In the first stanza, you can express your love for doves as they are a symbol of peace and tranquility. The second stanza can describe the beauty of the dove's plumage and the softness of their feathers. The third stanza can be about the peacefulness that they bring to their surroundings, such as in a garden or park. You can end the sonnet with a final line that encapsulates your feelings about doves.
Here is the sonnet for your reference:
> Doves are a symbol of peace and tranquility,
> A gentle song that soothes the soul.
> Their soft feathers and sweet voices bring a sense of warmth and calm to their surroundings,
> And they are a welcome sight in any landscape.
>
> Their grace and beauty are a reminder of the simple pleasures in life,
> And their peaceful presence is a welcome respite from the stresses of everyday life.
>
> Doves are a symbol of love and friendship,
> And their companionship is a gift we can all cherish.
>
> Let us embrace the beauty and serenity of doves,
> And let us cherish the gift of their presence in our lives.
>
> May we always find beauty and serenity in the simple pleasures of life,
> And may we cherish the gift of friendship and love.
>
> Doves are a symbol of peace and tranquility,
> A gentle song that soothes the soul.
> Their soft feathers and sweet voices bring a sense of warmth and calm to their surroundings,
> And they are a welcome sight in any landscape.
>
> Their grace and beauty are a reminder of the simple pleasures in life,
> And their peaceful presence is a welcome respite from the stresses of everyday life.
>
> Doves are a symbol of love and friendship,
> And their companionship is a gift we can all cherish.
>
> Let us embrace the beauty and serenity of doves,
> And let us cherish the gift of their presence in our lives.
>
> May we always find beauty and serenity in the simple pleasures of life,
> And may we cherish the gift of friendship and love.
```
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings (see the sketch after this list)
* It does not use biases
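The ALiBi bullet above is why no positional embeddings are needed: each attention head adds a fixed, distance-proportional penalty to its attention logits. The snippet below is only an illustrative sketch of that bias (head slopes follow the ALiBi paper's geometric schedule for power-of-two head counts; the actual MPT implementation lives in llm-foundry):
```python
import torch

# Illustrative sketch of the ALiBi bias (not the llm-foundry implementation).
# Each head h gets a slope m_h; the bias m_h * (j - i) is added to the attention
# logits, so distant keys are penalized. A causal mask is applied separately.
def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    positions = torch.arange(seq_len)
    relative = positions[None, :] - positions[:, None]   # (seq_len, seq_len), j - i
    return slopes[:, None, None] * relative[None, :, :]  # (n_heads, seq_len, seq_len)
```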
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
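As a rough illustration of that sampling-and-packing scheme (this is not MosaicML's actual data pipeline; the proportions dictionary below is truncated and the stream interface is hypothetical):
```python
import random

MIX = {"mc4_en": 0.33, "c4_en_semdedup": 0.299, "rp_commoncrawl": 0.10}  # truncated illustration
SEQ_LEN = 2048

def pack_one_sequence(token_streams):
    """token_streams: dict mapping source name -> iterator over tokenized examples (lists of ids)."""
    # Pick a source dataset according to the mixing probabilities, ...
    source = random.choices(list(MIX), weights=list(MIX.values()), k=1)[0]
    tokens = []
    # ... then concatenate shuffled examples from that same source until the
    # 2048-token training sequence is full.
    while len(tokens) < SEQ_LEN:
        tokens.extend(next(token_streams[source]))
    return tokens[:SEQ_LEN]
```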
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
    title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
saikatkumardey/LaMini-Flan-T5-77M-jerry_seinfeld_dialogues
|
saikatkumardey
| 2023-06-08T05:39:26Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-01T16:40:00Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: LaMini-Flan-T5-77M-jerry_seinfeld_dialogues
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
--- WORK IN PROGRESS ---
# LaMini-Flan-T5-77M-jerry_seinfeld_dialogues
This model is a fine-tuned version of [MBZUAI/LaMini-Flan-T5-77M](https://huggingface.co/MBZUAI/LaMini-Flan-T5-77M) on an unknown dataset.
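Until the card is filled in, here is a minimal usage sketch (it assumes the checkpoint keeps the standard seq2seq interface of its LaMini-Flan-T5 base model; the prompt is only an example):
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="saikatkumardey/LaMini-Flan-T5-77M-jerry_seinfeld_dialogues",
)
# Hypothetical prompt; adjust to the format used during fine-tuning.
print(generator("What's the deal with airline food?", max_new_tokens=64)[0]["generated_text"])
```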
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.5
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
saikatkumardey/lamini-flan-t5-783m_int8_float16
|
saikatkumardey
| 2023-06-08T05:37:42Z | 1 | 0 |
transformers
|
[
"transformers",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-30T03:35:18Z |
---
license: cc-by-nc-4.0
---
This is a quantized version of the model [MBZUAI/LaMini-Flan-T5-783M](https://huggingface.co/MBZUAI/LaMini-Flan-T5-783M).
It was quantized using [CTranslate2](https://opennmt.net/CTranslate2/guides/transformers.html):
```
ct2-transformers-converter --model MBZUAI/LaMini-Flan-T5-783M --output_dir lamini-flan-t5-783m-int8_float16 --quantization int8_float16
```
# How to use it?
## Clone the model
```
git lfs install
git clone git@hf.co:saikatkumardey/lamini-flan-t5-783m_int8_float16
```
## Code example
```python
import ctranslate2
import transformers
model_dir = "lamini-flan-t5-783m_int8_float16"
translator = ctranslate2.Translator(
model_dir, compute_type="auto", inter_threads=4, intra_threads=4
)
tokenizer = transformers.AutoTokenizer.from_pretrained("MBZUAI/LaMini-Flan-T5-783M")
input_prompt = """
instruction: Restrict the answer based on the context only. Be verbose.
context: <P>This is an introduction to pandas categorical data type, including a short comparison with R’s factor.
Categoricals are a pandas data type corresponding to categorical variables in statistics. A categorical variable takes on a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood type, country affiliation, observation time or rating via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or ‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, …) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical order of the values. Internally, the data structure consists of a categories array and an integer array of codes which point to the real value in the categories array.
The categorical data type is useful in the following cases:
- A string variable consisting of only a few different values. Converting such a string variable to a categorical variable will save some memory, see here.
- The lexical order of a variable is not the same as the logical order (“one”, “two”, “three”). By converting to a categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of the lexical order, see here.
- As a signal to other Python libraries that this column should be treated as a categorical variable (e.g. to use suitable statistical methods or plot types).</P>
question: In what cases is the categorical data type useful?
answer:
"""
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_prompt))
results = translator.translate_batch(
[input_tokens],
beam_size=1,
sampling_temperature=0.01,
max_decoding_length=1024,
batch_type="tokens",
max_batch_size=64,
)
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
print(output_text)
```
|
ka13001v1/Taxi-v3
|
ka13001v1
| 2023-06-08T05:37:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T05:37:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ka13001v1/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
GautamR/mbert_agri_detect
|
GautamR
| 2023-06-08T05:37:15Z | 0 | 0 |
transformers
|
[
"transformers",
"text-classification",
"en",
"dataset:GautamR/test_agri",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T12:26:25Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
datasets:
- GautamR/test_agri
---
|
Askinkaty/RuT5_GEC
|
Askinkaty
| 2023-06-08T05:29:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-31T11:39:22Z |
---
license: apache-2.0
language:
- ru
---
|
ka13001v1/q-FrozenLake-v1-4x4-noSlippery
|
ka13001v1
| 2023-06-08T05:25:31Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T05:25:28Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ka13001v1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
GautamR/detect_agri
|
GautamR
| 2023-06-08T05:24:51Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"mobilebert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-31T05:30:37Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
library_name: transformers
---
|
Tsuroko/Agustini
|
Tsuroko
| 2023-06-08T05:15:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T05:14:50Z |
---
license: creativeml-openrail-m
---
|
Tsuroko/Agustinaa
|
Tsuroko
| 2023-06-08T05:14:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T05:14:37Z |
---
license: creativeml-openrail-m
---
|
97jmlr/sd-class-butterflies-32
|
97jmlr
| 2023-06-08T05:14:32Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-06-08T05:14:21Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('97jmlr/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
hfl/chinese-llama-lora-33b
|
hfl
| 2023-06-08T05:13:56Z | 0 | 8 | null |
[
"zh",
"license:apache-2.0",
"region:us"
] | null | 2023-06-07T09:16:09Z |
---
license: apache-2.0
language:
- zh
---
# Chinese-LLaMA-LoRA-33B
This repo contains the tokenizer, Chinese-LLaMA LoRA weights and configs for [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
Instructions for using the weights can be found at https://github.com/ymcui/Chinese-LLaMA-Alpaca.
|
dennischui/taxi_v3
|
dennischui
| 2023-06-08T04:49:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T04:33:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.65
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="dennischui/taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
abbymark/Reinforce-Pixelcopter-PLE-v0
|
abbymark
| 2023-06-08T04:30:40Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T01:04:14Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.40 +/- 25.99
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dennischui/q-FrozenLake-v1-4x4-noSlippery
|
dennischui
| 2023-06-08T04:28:41Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T04:28:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="dennischui/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mfaiq2307/faiq-wav2vec2-large-xlsr-indo-demo-v100-batch32
|
mfaiq2307
| 2023-06-08T04:26:04Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-08T02:11:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: faiq-wav2vec2-large-xlsr-indo-demo-v100-batch32
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.4312037597133684
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# faiq-wav2vec2-large-xlsr-indo-demo-v100-batch32
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3723
- Wer: 0.4312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6761 | 5.84 | 400 | 2.7575 | 1.0 |
| 1.044 | 11.68 | 800 | 0.3883 | 0.5253 |
| 0.2565 | 17.52 | 1200 | 0.3655 | 0.4591 |
| 0.1725 | 23.36 | 1600 | 0.3834 | 0.4430 |
| 0.141 | 29.2 | 2000 | 0.3723 | 0.4312 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.6.1
- Tokenizers 0.13.3
|
wjungvm/distilbert-base-uncased-finetuned-emotion
|
wjungvm
| 2023-06-08T04:03:46Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T03:55:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245837586314949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8124 | 1.0 | 250 | 0.3055 | 0.91 | 0.9079 |
| 0.2446 | 2.0 | 500 | 0.2161 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
wiorz/legal_bert_sm_cv_defined_summarized_4
|
wiorz
| 2023-06-08T03:52:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T03:49:38Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_sm_cv_defined_summarized_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_sm_cv_defined_summarized_4
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7595
- Accuracy: 0.811
- Precision: 0.5385
- Recall: 0.2154
- F1: 0.3077
- D-index: 1.5216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 250 | 0.4882 | 0.805 | 0.0 | 0.0 | 0.0 | 1.4370 |
| 0.5662 | 2.0 | 500 | 0.4681 | 0.805 | 0.0 | 0.0 | 0.0 | 1.4370 |
| 0.5662 | 3.0 | 750 | 0.4649 | 0.807 | 0.625 | 0.0256 | 0.0493 | 1.4491 |
| 0.4397 | 4.0 | 1000 | 0.4675 | 0.819 | 0.7692 | 0.1026 | 0.1810 | 1.4931 |
| 0.4397 | 5.0 | 1250 | 0.5234 | 0.816 | 0.7391 | 0.0872 | 0.1560 | 1.4836 |
| 0.3492 | 6.0 | 1500 | 0.5137 | 0.825 | 0.6562 | 0.2154 | 0.3243 | 1.5406 |
| 0.3492 | 7.0 | 1750 | 0.5490 | 0.81 | 0.5490 | 0.1436 | 0.2276 | 1.4952 |
| 0.2409 | 8.0 | 2000 | 0.6896 | 0.82 | 0.5882 | 0.2564 | 0.3571 | 1.5478 |
| 0.2409 | 9.0 | 2250 | 0.7600 | 0.808 | 0.5155 | 0.2564 | 0.3425 | 1.5316 |
| 0.1506 | 10.0 | 2500 | 1.0232 | 0.813 | 0.5714 | 0.1641 | 0.2550 | 1.5065 |
| 0.1506 | 11.0 | 2750 | 1.0855 | 0.823 | 0.6731 | 0.1795 | 0.2834 | 1.5255 |
| 0.0851 | 12.0 | 3000 | 1.1956 | 0.797 | 0.4655 | 0.2769 | 0.3473 | 1.5236 |
| 0.0851 | 13.0 | 3250 | 1.2379 | 0.808 | 0.5190 | 0.2103 | 0.2993 | 1.5157 |
| 0.0538 | 14.0 | 3500 | 1.4613 | 0.807 | 0.5143 | 0.1846 | 0.2717 | 1.5055 |
| 0.0538 | 15.0 | 3750 | 1.4960 | 0.815 | 0.5658 | 0.2205 | 0.3173 | 1.5288 |
| 0.0334 | 16.0 | 4000 | 1.6423 | 0.806 | 0.5067 | 0.1949 | 0.2815 | 1.5076 |
| 0.0334 | 17.0 | 4250 | 1.6386 | 0.804 | 0.4958 | 0.3026 | 0.3758 | 1.5419 |
| 0.0364 | 18.0 | 4500 | 1.6520 | 0.797 | 0.45 | 0.1846 | 0.2618 | 1.4917 |
| 0.0364 | 19.0 | 4750 | 1.6842 | 0.804 | 0.4953 | 0.2718 | 0.3510 | 1.5314 |
| 0.0167 | 20.0 | 5000 | 1.7595 | 0.811 | 0.5385 | 0.2154 | 0.3077 | 1.5216 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SEVUNX/JOY_DIFFUSION
|
SEVUNX
| 2023-06-08T03:46:53Z | 0 | 0 | null |
[
"art",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-02-28T13:37:36Z |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- art
- stable-diffusion
---
<center>
<b><i><font size="6"><p style="color:red">JOY DIFFUSION CHECKPOINT MERGE</p></font></i></b>
<img src="https://64.media.tumblr.com/3c2c6f40b41877ef923150a52705a14a/tumblr_mlnzf9BvWN1qg6rkio1_500.gifv" alt="">
</center>
|
ontel/Oktaaa
|
ontel
| 2023-06-08T03:09:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T03:07:49Z |
---
license: creativeml-openrail-m
---
|
jangmin/whisper-small-ko-normalized-1273h
|
jangmin
| 2023-06-08T02:46:40Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-01T10:00:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ko-normalized-1273h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ko-normalized-1273h
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1426
- Wer: 0.0671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0726 | 1.0 | 6581 | 0.1490 | 0.0721 |
| 0.0368 | 2.0 | 13162 | 0.1405 | 0.0686 |
| 0.0317 | 3.0 | 19743 | 0.1426 | 0.0671 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
## Evaluation Results for the dataset `google/fleurs`
The trained model is evaluated on the `test` split of subset `ko_kr` from the dataset `google/fleurs`.
Please note that the model was not trained on the `train` split from the dataset.
|model|Wer|
|---|---|
|openai/whisper|0.2826|
|this model|0.2679|
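A hedged sketch of how such an evaluation can be reproduced (column names follow the public `google/fleurs` schema; the exact text normalization behind the reported numbers is not specified here):
```python
import evaluate
import torch
from datasets import load_dataset
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jangmin/whisper-small-ko-normalized-1273h",
    device=0 if torch.cuda.is_available() else -1,
)
fleurs = load_dataset("google/fleurs", "ko_kr", split="test")
wer_metric = evaluate.load("wer")

predictions, references = [], []
for sample in fleurs:
    predictions.append(asr(sample["audio"]["array"])["text"])
    references.append(sample["transcription"])
print("WER:", wer_metric.compute(predictions=predictions, references=references))
```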
|
luffycodes/parallel-roberta-large
|
luffycodes
| 2023-06-08T02:43:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"arxiv:2305.13297",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-08T01:00:11Z |
---
license: apache-2.0
language:
- en
---
## Model weights for Parallel Roberta-Large model ##
We provide the [weights](https://huggingface.co/luffycodes/Parallel-Roberta-Large) for the Parallel Attention and Feedforward design (PAF) for RoBERTa-Large.
To use this model, use the following [paf_modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/paf_modeling_roberta.py) file.
## Here is how to use this model to get the features of a given text in PyTorch
```python
# use paf_modeling_roberta.py instead of modeling_roberta
from paf_modeling_roberta import RobertaModel
from transformers import RobertaTokenizer
model = RobertaModel.from_pretrained('luffycodes/parallel-roberta-large')
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Efficient GPU implementation
[gpu_paf_modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/gpu_paf_modeling_roberta.py) provides an efficient GPU implementation of the PAF design in PyTorch.
It combines the key, query, value, and first feed-forward sub-layer (intermediate) computations into a single operation.
```
self.kqv_ffn1.weight.data = torch.cat((attention.self.key.weight.data, attention.self.query.weight.data,
attention.self.value.weight.data,
intermediate.dense.weight.data))
```
However, I could not efficiently optimize the second feedforward network sub-layer computation to run in parallel.
## What is Parallel Attention and Feed-Forward Design?

*On the left is the standard Series Attention and Feed-Forward Net Design (SAF) for transformer models. On the right is the Parallel Attention and Feed-Forward Net Design (PAF) used in transformer models like PaLM (Chowdhery et al., 2022) and Mesh-Transformers (Wang, 2021)*
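As a rough illustration of the PAF idea (a hedged sketch only, not the repository's `paf_modeling_roberta.py` code; layer-norm placement and dimensions are assumptions), both branches read the same layer input and their outputs are summed:
```python
import torch
import torch.nn as nn

class PAFLayerSketch(nn.Module):
    """Parallel attention + feed-forward: both branches share the layer input."""

    def __init__(self, d_model: int = 1024, n_heads: int = 16, d_ff: int = 4096):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h)
        return x + attn_out + self.ffn(h)   # parallel branches, instead of the sequential SAF arrangement
```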
## Evaluation results of [PAF-RoBERTa-Large](https://huggingface.co/luffycodes/parallel-roberta-large)
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 89.3 | 91.7 | 94.3 | 96.2 | 64.0 | 91.0 | 90.4 | 80.1 |
If you use this work, please cite:
Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design:
https://arxiv.org/abs/2305.13297
```
@misc{sonkar2023investigating,
title={Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design},
author={Shashank Sonkar and Richard G. Baraniuk},
year={2023},
eprint={2305.13297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
emozilla/mpt-7b-storywriter-fast
|
emozilla
| 2023-06-08T02:39:20Z | 14 | 11 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:the_pile_books3",
"arxiv:2108.12409",
"arxiv:2205.14135",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-31T16:31:13Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
datasets:
- the_pile_books3
inference: false
---
The code for this model has been updated to include the adaptions from [Birchlabs/mosaicml-mpt-7b-chat-qlora](https://huggingface.co/Birchlabs/mosaicml-mpt-7b-chat-qlora) which allow MPT models to be loaded with `device_map="auto"` and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) support (e.g. `load_in_8bit`, `load_in_4bit`).
It also has the [latest key-value cache MPT code](https://github.com/mosaicml/llm-foundry/pull/210) to allow for fast inference with `transformers` (thus, `use_cache` is set to `True` in `config.json`).
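For example, with those adaptations the checkpoint can be loaded quantized and automatically sharded across available devices (a sketch assuming `accelerate` and `bitsandbytes` are installed):
```python
import torch
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    "emozilla/mpt-7b-storywriter-fast",
    load_in_8bit=True,         # bitsandbytes 8-bit quantization
    device_map="auto",         # let accelerate place layers on available devices
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
```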
# MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
Apache 2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch
import transformers

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536,
ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.update({"max_seq_len": 83968})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
## Community-Created Resources
These were not created by MosaicML, but you may find them useful. These links are not an endorsement of the creators or their content.
- [Oobabooga Running MPT-7B-Storywriter](https://youtu.be/QVVb6Md6huA)
- [NEW MPT-7B-StoryWriter CRUSHES GPT-4!](https://www.youtube.com/watch?v=O9Y_ZdsuKWQ&t=649s) - Has a long section on running locally using Oobabooga
## Example Epilogue
The full text of _The Great Gatsby_ (67873 tokens) was fed to the model, followed by the text "EPILOGUE"
**TheRealFitzgerald233**:
> ...
>
> So we beat on, boats against the current, borne back ceaselessly into the past
>
> EPILOGUE
**MPT-7B-StoryWriter**:
>I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.
>
>And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.
>
>And I knew that day that I didn't love her.
>
>That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.
>
>I have sorted them by date and placed them in the order in which they were written. They are very few in number.
>
>They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.
>
>There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.
>
>But a few months ago I read somewhere that he had been seen in New York. He had become a great man.
>
>And I knew that he had not changed at all.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
TurboPascal/Chatterbox-LLaMA-zh-base
|
TurboPascal
| 2023-06-08T02:28:55Z | 129 | 13 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T04:13:26Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
---
Llama-zh-base is an open-source project that offers a complete training pipeline for building Chinese large language models, ranging from dataset preparation to tokenization, pre-training, prompt tuning, and the reinforcement learning technique RLHF.
This is the Llama-zh-base model trained from scratch on the Chinese pretraining corpus in this project. It has roughly 0.8B parameters.
A LLaMA model pretrained from scratch on roughly 120 GB of Chinese corpus, intended to provide a usable small-to-medium base model. The embedding layer and tokenizer were rebuilt. The model has not yet been instruction-tuned. The parameter count is about 0.8B.
Project GitHub link: [Repo Links](https://github.com/enze5088/Chatterbox/blob/main/docs/model/llama-zh-base.md)
## Introduction
The LLama-zh-base model follows the architecture of the current LLaMA family but is pretrained again from scratch.
Because the original LLaMA model was never trained separately on Chinese corpora, its vocabulary also contains very few Chinese characters.
This project rebuilds LLaMA's tokenizer and vocabulary, reinitializes the corresponding model, and continues pretraining on Chinese-domain data.
## Model Contents
The Chatterbox-Llama-zh series
| Model Name | Model Size | Link |
| ------------------------ | -------- | ----------------------------------------------------------- |
| Chatterbox-Llama-zh-base | 0.8B | https://huggingface.co/TurboPascal/Chatterbox-LLaMA-zh-base |
| Chatterbox-Llama-zh-2b6 | 2B6 | Coming soon |
| | | |
Notes:
1. This model does not use the original LLaMA weights, so there is no need to worry about the LLaMA weight license.
## Data
The pretraining stage uses open-source data together with data crawled for this project: about 33 GB of Chinese pretraining data plus the MC4-zh and Code datasets. After cleaning and filtering, roughly 120 GB of data was used to train for 1 epoch with an initial learning rate of 1e-4. The model has not been instruction-tuned.
### Chinese Pretraining Data
- Sina News data (SinaNews): 2.2 million news documents
- People's Daily data (People's Daily Datasets): 1.48 million articles.
- [Chinese Wikipedia (wiki2019zh): 1 million well-structured Chinese entries](https://github.com/brightmart/nlp_chinese_corpus)
- [News corpus (news2016zh): 2.5 million news articles, with keywords and descriptions](https://github.com/brightmart/nlp_chinese_corpus)
- [Community Q&A, JSON version (webtext2019zh): 4.1 million high-quality community Q&A pairs](https://github.com/brightmart/nlp_chinese_corpus)
- [THUCNews data (THUCNews): 740,000 news documents (2.19 GB)](http://thuctc.thunlp.org/#%E4%B8%AD%E6%96%87%E6%96%87%E6%9C%AC%E5%88%86%E7%B1%BB%E6%95%B0%E6%8D%AE%E9%9B%86THUCNews)
- [Comment corpus (comments2019zh_corpus): 2.4 million comments](https://github.com/CLUEbenchmark/CLUECorpus2020)
- [Community interaction corpus (webText2019zh_corpus): 3.1 million community interaction records](https://github.com/CLUEbenchmark/CLUECorpus2020)
- [Scientific literature data (CSL): about 400,000 abstracts from Chinese core journals](https://github.com/ydli-ai/CSL)
- [Belle dataset](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
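Since the checkpoint follows the standard LLaMA causal-LM layout, a minimal generation sketch might look like the following (the prompt is only an example; because this is a base model without instruction tuning, expect plain continuation rather than answers):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "TurboPascal/Chatterbox-LLaMA-zh-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16)

inputs = tokenizer("北京是", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```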
|
RadwaH/DreamBoothAgnes2
|
RadwaH
| 2023-06-08T02:17:18Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-06T18:55:23Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: sks pixar girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - RadwaH/DreamBoothAgnes2
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on sks pixar girl using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
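A minimal inference sketch with 🧨 Diffusers (the prompt simply reuses the instance prompt above; GPU and fp16 are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "RadwaH/DreamBoothAgnes2", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait of sks pixar girl, smiling").images[0]
image.save("agnes.png")
```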
|
dehart20/q-FrozenLake-v1-8x8-Slippery
|
dehart20
| 2023-06-08T01:58:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T01:40:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="dehart20/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dehart20/q-FrozenLake-v1-4x4-noSlippery
|
dehart20
| 2023-06-08T01:32:52Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T01:32:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="dehart20/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Brandulio/dqn-SpaceInvadersNoFrameskip-v4
|
Brandulio
| 2023-06-08T01:25:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T01:24:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 131.94
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Brandulio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Brandulio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Brandulio
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
wikingz/mayuyokotarealis
|
wikingz
| 2023-06-08T01:21:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T01:17:37Z |
---
license: creativeml-openrail-m
---
|
luffycodes/tutorbot-spock-bio-llama-diff
|
luffycodes
| 2023-06-08T01:19:14Z | 10 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"biology",
"chatgpt",
"vicuna",
"tutorbot",
"conversation",
"dataset:luffycodes/Tutorbot-Spock-Bio-Dataset",
"arxiv:2305.13272",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-31T17:36:07Z |
---
datasets:
- luffycodes/Tutorbot-Spock-Bio-Dataset
license: apache-2.0
tags:
- biology
- chatgpt
- llama
- vicuna
- tutorbot
- conversation
---
**NOTE: This "diff model" cannot be used directly.**
Users have to apply it on top of the original LLaMA weights to get actual Spock weights.
Please find the instructions here: https://github.com/luffycodes/Tutorbot-Spock-Bio.
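For orientation only, here is a heavily hedged sketch of what applying a weight diff typically looks like (one common convention is an additive delta over the base parameters; the apply script in the repository above defines the exact, authoritative procedure for Spock, and the paths below are hypothetical):
```python
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/original-llama-7b")               # hypothetical local path
diff = AutoModelForCausalLM.from_pretrained("luffycodes/tutorbot-spock-bio-llama-diff")

with torch.no_grad():
    for (_, p_base), (_, p_diff) in zip(base.named_parameters(), diff.named_parameters()):
        p_base.add_(p_diff)   # assumes an additive delta; follow the repo script for the real recipe

base.save_pretrained("spock-bio-7b")   # hypothetical output directory
```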
<br>
<br>
# Spock Model Card
## Github details
Please checkout the repo: https://github.com/luffycodes/Tutorbot-Spock-Bio.
## Model details
**Model type:**
Spock is an open-source educational tutoring chatbot trained by fine-tuning LLaMA and Vicuna model on synthetic student-tutorbot conversations generated using a specialized prompt.
**Model date:**
Spock was trained between April 2023 and May 2023.
**Organizations developing the model:**
The Spock team with members from Rice University and OpenStax.
## Training dataset
700 conversations generated using a [specialized prompt](https://github.com/luffycodes/Tutorbot-Spock-Bio/blob/main/prompts/conversation_gen/v3.txt) from GPT-4.
Dataset link: https://huggingface.co/datasets/luffycodes/Tutorbot-Spock-Bio-Dataset
**Paper or resources for more information:**
https://arxiv.org/abs/2305.13272
**Code or resources for more information:**
https://github.com/luffycodes/Tutorbot-Spock-Bio
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
Shashank Sonkar (ss164@rice.edu)
If you use this work, please cite:
CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles
https://arxiv.org/abs/2305.13272
```
@misc{sonkar2023class,
title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles},
author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk},
year={2023},
eprint={2305.13272},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Bharath1121/distilgpt2-finetuned-wikitext2
|
Bharath1121
| 2023-06-08T00:58:16Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T00:54:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.4444 |
| No log | 2.0 | 8 | 3.4067 |
| No log | 3.0 | 12 | 3.3935 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
raghvendramall/esm2_t30_150M_UR50D-finetuned-localization
|
raghvendramall
| 2023-06-08T00:55:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-07T10:18:15Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: esm2_t30_150M_UR50D-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t30_150M_UR50D-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8191
- F1: 0.7240
## Model description
More information needed
## Intended uses & limitations
More information needed
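No usage notes are given; as a hedged sketch, the checkpoint can presumably be queried with the `transformers` text-classification pipeline on a raw amino-acid sequence (the sequence below is made up, and the label names depend on the checkpoint's config):
```python
from transformers import pipeline

# Model id taken from this card; the label mapping is whatever the checkpoint defines.
classifier = pipeline(
    "text-classification",
    model="raghvendramall/esm2_t30_150M_UR50D-finetuned-localization",
)

# ESM models take raw amino-acid sequences as input; this sequence is a made-up example.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"
print(classifier(sequence))
```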
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4436 | 1.0 | 7778 | 0.4439 | 0.7285 |
| 0.374 | 2.0 | 15556 | 0.4806 | 0.7428 |
| 0.2786 | 3.0 | 23334 | 0.8067 | 0.7243 |
| 0.1524 | 4.0 | 31112 | 1.3323 | 0.7261 |
| 0.1035 | 5.0 | 38890 | 1.3754 | 0.7227 |
| 0.0532 | 6.0 | 46668 | 1.4962 | 0.7165 |
| 0.0379 | 7.0 | 54446 | 1.5434 | 0.7173 |
| 0.0319 | 8.0 | 62224 | 1.6561 | 0.7201 |
| 0.0181 | 9.0 | 70002 | 1.7344 | 0.7259 |
| 0.0056 | 10.0 | 77780 | 1.8191 | 0.7240 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
wiorz/bert_sm_cv_summarized_4
|
wiorz
| 2023-06-08T00:51:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T00:47:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_sm_cv_summarized_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_sm_cv_summarized_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9996
- Accuracy: 0.802
- Precision: 0.48
- Recall: 0.1846
- F1: 0.2667
- D-index: 1.4986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
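These settings map roughly onto `transformers.TrainingArguments` as sketched below; this is a hedged reconstruction, not the actual training script (which is not included in the card), and the output directory is an assumption.
```python
from transformers import TrainingArguments

# Rough mapping of the listed hyperparameters onto TrainingArguments.
# The Trainer's default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-8.
training_args = TrainingArguments(
    output_dir="bert_sm_cv_summarized_4",   # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=8000,
    num_train_epochs=20,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```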
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 250 | 0.4713 | 0.812 | 0.5814 | 0.1282 | 0.2101 | 1.4926 |
| 0.5708 | 2.0 | 500 | 0.4584 | 0.811 | 0.5625 | 0.1385 | 0.2222 | 1.4948 |
| 0.5708 | 3.0 | 750 | 0.4557 | 0.813 | 0.5769 | 0.1538 | 0.2429 | 1.5029 |
| 0.4231 | 4.0 | 1000 | 0.4700 | 0.81 | 0.5316 | 0.2154 | 0.3066 | 1.5202 |
| 0.4231 | 5.0 | 1250 | 0.4979 | 0.812 | 0.5385 | 0.2513 | 0.3427 | 1.5353 |
| 0.3292 | 6.0 | 1500 | 0.5337 | 0.816 | 0.5647 | 0.2462 | 0.3429 | 1.5389 |
| 0.3292 | 7.0 | 1750 | 0.6282 | 0.797 | 0.4615 | 0.2462 | 0.3211 | 1.5131 |
| 0.2218 | 8.0 | 2000 | 0.7182 | 0.805 | 0.5 | 0.2513 | 0.3345 | 1.5257 |
| 0.2218 | 9.0 | 2250 | 0.8488 | 0.809 | 0.5208 | 0.2564 | 0.3436 | 1.5329 |
| 0.1478 | 10.0 | 2500 | 0.9830 | 0.809 | 0.5294 | 0.1846 | 0.2738 | 1.5082 |
| 0.1478 | 11.0 | 2750 | 1.0302 | 0.79 | 0.4419 | 0.2923 | 0.3519 | 1.5193 |
| 0.077 | 12.0 | 3000 | 1.0467 | 0.795 | 0.4658 | 0.3487 | 0.3988 | 1.5452 |
| 0.077 | 13.0 | 3250 | 1.2609 | 0.803 | 0.4931 | 0.3641 | 0.4189 | 1.5612 |
| 0.0328 | 14.0 | 3500 | 1.4127 | 0.806 | 0.5044 | 0.2923 | 0.3701 | 1.5411 |
| 0.0328 | 15.0 | 3750 | 1.6626 | 0.802 | 0.4835 | 0.2256 | 0.3077 | 1.5128 |
| 0.0189 | 16.0 | 4000 | 1.7062 | 0.81 | 0.5362 | 0.1897 | 0.2803 | 1.5113 |
| 0.0189 | 17.0 | 4250 | 1.9225 | 0.809 | 0.54 | 0.1385 | 0.2204 | 1.4921 |
| 0.0214 | 18.0 | 4500 | 1.8228 | 0.81 | 0.5269 | 0.2513 | 0.3403 | 1.5325 |
| 0.0214 | 19.0 | 4750 | 1.9544 | 0.789 | 0.4355 | 0.2769 | 0.3386 | 1.5127 |
| 0.0184 | 20.0 | 5000 | 1.9996 | 0.802 | 0.48 | 0.1846 | 0.2667 | 1.4986 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yukismd/JapaneseQuizChatbot_v1
|
yukismd
| 2023-06-08T00:48:50Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"ja",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-08T00:25:01Z |
---
language:
- ja
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)
- Training Data: [AI王 〜クイズAI日本一決定戦〜](https://sites.google.com/view/project-aio/dataset) ([Transformed dataset for training by H2O LLM Studio](https://h2oai-jpn-public.s3.amazonaws.com/sample-data/llm/JapaneseQuiz.csv))
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="yukismd/JapaneseQuizChatbot_v1",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"日本で一番高い山は富士山ですが、二番目に高い山は?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("日本で一番高い山は富士山ですが、二番目に高い山は?")["prompt_text"])
```
```bash
<|prompt|>日本で一番高い山は富士山ですが、二番目に高い山は?<|endoftext|><|answer|>
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yukismd/JapaneseQuizChatbot_v1",
use_fast=True,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"yukismd/JapaneseQuizChatbot_v1",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"日本で一番高い山は富士山ですが、二番目に高い山は?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline yourself from the loaded model and tokenizer, handling the preprocessing steps explicitly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yukismd/JapaneseQuizChatbot_v1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>日本で一番高い山は富士山ですが、二番目に高い山は?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(52224, 4096)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=4096, out_features=12288, bias=True)
(dense): Linear(in_features=4096, out_features=4096, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=4096, out_features=16384, bias=True)
(dense_4h_to_h): Linear(in_features=16384, out_features=4096, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=4096, out_features=52224, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=yukismd/JapaneseQuizChatbot_v1 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
mmendoza/gpt-j-6B-lora-polite
|
mmendoza
| 2023-06-08T00:20:50Z | 3 | 0 |
peft
|
[
"peft",
"politeness",
"paraphrasing",
"controlled text",
"region:us"
] | null | 2023-06-07T15:44:02Z |
---
library_name: peft
tags:
- politeness
- paraphrasing
- controlled text
---
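The card provides no usage notes. As a hedged sketch, the adapter can presumably be loaded with the `peft` library on top of its base model; the base checkpoint (`EleutherAI/gpt-j-6b`) is assumed from the repository name, and the prompt is a made-up example.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model assumed from the repository name; check adapter_config.json to confirm.
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b", torch_dtype=torch.float16, device_map="auto"  # needs accelerate
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")

# Attach the LoRA adapter weights stored in this repository.
model = PeftModel.from_pretrained(base, "mmendoza/gpt-j-6B-lora-polite")
model.eval()

# Made-up input; the expected prompt format for polite paraphrasing is not documented.
inputs = tokenizer("Send me the report now.", return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```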
|
abbymark/Reinforce-CartPole-v1
|
abbymark
| 2023-06-08T00:15:37Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T00:15:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 480.60 +/- 58.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
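For reference, the core REINFORCE update this agent was trained with looks roughly like the sketch below (a minimal illustration using `gymnasium` and PyTorch; it is not the exact course script or this checkpoint's training code).
```python
import torch
import torch.nn as nn
import gymnasium as gym
from torch.distributions import Categorical

# Minimal policy network for CartPole-v1 (4 observations, 2 discrete actions).
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = gym.make("CartPole-v1")
gamma = 0.99

for episode in range(1000):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        dist = Categorical(policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        done = terminated or truncated

    # Discounted returns, computed backwards over the episode, then normalized.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # Policy-gradient loss: maximize the expected return.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```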
|
edmundmills/dignity-classifier
|
edmundmills
| 2023-06-08T00:01:24Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-07T23:02:47Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dignity-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dignity-classifier
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5157
- Accuracy: 0.8678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7722 | 1.0 | 98 | 0.7799 | 0.6897 |
| 0.4301 | 2.0 | 196 | 0.4704 | 0.8477 |
| 0.2445 | 3.0 | 294 | 0.5107 | 0.8305 |
| 0.1626 | 4.0 | 392 | 0.5553 | 0.8477 |
| 0.0653 | 5.0 | 490 | 0.5157 | 0.8678 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cedj/w2v2-libri-10min
|
cedj
| 2023-06-07T23:43:05Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-07T15:52:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2-libri-10min
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-libri-10min
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1746
- Wer: 0.5906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.0324 | 125.0 | 250 | 2.9088 | 1.0 |
| 0.5241 | 250.0 | 500 | 1.6297 | 0.6030 |
| 0.0296 | 375.0 | 750 | 1.8666 | 0.6321 |
| 0.0182 | 500.0 | 1000 | 2.0439 | 0.6473 |
| 0.0081 | 625.0 | 1250 | 2.0702 | 0.6349 |
| 0.0079 | 750.0 | 1500 | 2.1576 | 0.6335 |
| 0.0053 | 875.0 | 1750 | 1.9883 | 0.6376 |
| 0.0025 | 1000.0 | 2000 | 2.1566 | 0.6072 |
| 0.002 | 1125.0 | 2250 | 2.1848 | 0.5975 |
| 0.0015 | 1250.0 | 2500 | 2.1746 | 0.5906 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
xekri/wav2vec2-common_voice_13_0-eo-10_1
|
xekri
| 2023-06-07T23:39:37Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_13_0",
"generated_from_trainer",
"eo",
"dataset:common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-06T00:13:16Z |
---
language:
- eo
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_13_0
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-common_voice_13_0-eo-10_1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_13_0
type: common_voice_13_0
config: eo
split: validation
args: 'Config: eo, Training split: train, Eval split: validation'
metrics:
- name: Wer
type: wer
value: 0.05342994850125446
- name: CER
type: cer
value: 0.0098
---
# wav2vec2-common_voice_13_0-eo-10_1, an Esperanto speech recognizer
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) Esperanto dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0391
- Cer: 0.0098
- Wer: 0.0534
The first 10 examples in the evaluation set:
| Actual<br>Predicted | CER |
|:--------------------|:----|
| `la orienta parto apud benino kaj niĝerio estis nomita sklavmarbordo`<br>`la orienta parto apud benino kaj niĝerio estis nomita sklafmarbordo` | 0.014925373134328358 |
| `en la sekva jaro li ricevis premion`<br>`en la sekva jaro li ricevis premion` | 0.0 |
| `ŝi studis historion ĉe la universitato de brita kolumbio`<br>`ŝi studis historion ĉe la universitato de brita kolumbio` | 0.0 |
| `larĝaj ŝtupoj kuras al la fasado`<br>`larĝaj ŝtupoj kuras al la fasado` | 0.0 |
| `la municipo ĝuas duan epokon de etendo kaj disvolviĝo`<br>`la municipo ĝuas duan epokon de etendo kaj disvolviĝo` | 0.0 |
| `li estis ankaŭ katedrestro kaj dekano`<br>`li estis ankaŭ katedresto kaj dekano` | 0.02702702702702703 |
| `librovendejo apartenas al la muzeo`<br>`librovendejo apartenas al l muzeo` | 0.029411764705882353 |
| `ĝi estas kutime malfacile videbla kaj troviĝas en subkreskaĵaro de arbaroj`<br>`ĝi estas kutime malfacile videbla kaj troviĝas en subkreskaĵo de arbaroj` | 0.02702702702702703 |
| `unue ili estas ruĝaj poste brunaj`<br>`unue ili estas ruĝaj poste brunaj` | 0.0 |
| `la loĝantaro laboras en la proksima ĉefurbo`<br>`la loĝantaro laboras en la proksima ĉefurbo` | 0.0 |
The differences in results for the above compared to the previous model ([xekri/wav2vec2-common_voice_13_0-eo-10](https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-10)) are:
* eepokon -> epokon
* katedristo -> katedresto
* al la muzeo -> al l muzeo
## Model description
See [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). This model is a version of [xekri/wav2vec2-common_voice_13_0-eo-10](https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-10) trained for 5 more epochs.
## Intended uses & limitations
Speech recognition for Esperanto. The base model was pretrained and fine-tuned on 16 kHz speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
The output is all lowercase, no punctuation.
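A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline (the audio path is a placeholder and should point to a 16 kHz recording, or to a file the pipeline can decode and resample via ffmpeg):
```python
from transformers import pipeline

# Model id taken from this card.
asr = pipeline("automatic-speech-recognition", model="xekri/wav2vec2-common_voice_13_0-eo-10_1")

# "audio.wav" is a placeholder path to an Esperanto recording.
print(asr("audio.wav")["text"])
```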
## Training and evaluation data
The training split was set to `train` and the eval split to `validation`. Some files were filtered out of the train and validation datasets due to bad data; see [xekri/wav2vec2-common_voice_13_0-eo-3](https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-3) for a detailed discussion. In summary, I used `xekri/wav2vec2-common_voice_13_0-eo-3` to detect bad files, then hard-coded those files into the trainer code so they are filtered out.
## Training procedure
I used a modified version of [`run_speech_recognition_ctc.py`](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition) for training. See [`run_speech_recognition_ctc.py`](https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-10/blob/main/run_speech_recognition_ctc.py) in this repo.
The parameters to the trainer are in [train.json](https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-10/blob/main/train.json) in this repo.
The key changes between this training run and `xekri/wav2vec2-common_voice_13_0-eo-3`, aside from the filtering and the use of the full training and validation sets, are:
* Layer drop probability is 20%
* Train only for 5 epochs
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- layerdrop: 0.2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.1142 | 0.22 | 1000 | 0.0483 | 0.0126 | 0.0707 |
| 0.1049 | 0.44 | 2000 | 0.0474 | 0.0123 | 0.0675 |
| 0.0982 | 0.67 | 3000 | 0.0471 | 0.0120 | 0.0664 |
| 0.092 | 0.89 | 4000 | 0.0459 | 0.0117 | 0.0640 |
| 0.0847 | 1.11 | 5000 | 0.0459 | 0.0115 | 0.0631 |
| 0.0837 | 1.33 | 6000 | 0.0453 | 0.0113 | 0.0624 |
| 0.0803 | 1.56 | 7000 | 0.0443 | 0.0109 | 0.0598 |
| 0.0826 | 1.78 | 8000 | 0.0441 | 0.0110 | 0.0604 |
| 0.0809 | 2.0 | 9000 | 0.0437 | 0.0110 | 0.0605 |
| 0.0728 | 2.22 | 10000 | 0.0451 | 0.0109 | 0.0597 |
| 0.0707 | 2.45 | 11000 | 0.0444 | 0.0108 | 0.0591 |
| 0.0698 | 2.67 | 12000 | 0.0442 | 0.0105 | 0.0576 |
| 0.0981 | 2.89 | 13000 | 0.0411 | 0.0104 | 0.0572 |
| 0.0928 | 3.11 | 14000 | 0.0413 | 0.0102 | 0.0561 |
| 0.0927 | 3.34 | 15000 | 0.0410 | 0.0102 | 0.0565 |
| 0.0886 | 3.56 | 16000 | 0.0402 | 0.0102 | 0.0558 |
| 0.091 | 3.78 | 17000 | 0.0400 | 0.0101 | 0.0553 |
| 0.0888 | 4.0 | 18000 | 0.0398 | 0.0100 | 0.0546 |
| 0.0885 | 4.23 | 19000 | 0.0395 | 0.0099 | 0.0542 |
| 0.0869 | 4.45 | 20000 | 0.0394 | 0.0099 | 0.0540 |
| 0.0844 | 4.67 | 21000 | 0.0393 | 0.0098 | 0.0539 |
| 0.0882 | 4.89 | 22000 | 0.0391 | 0.0098 | 0.0537 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
andymunoz20/vit-base-patch16-224-finetuned-flower
|
andymunoz20
| 2023-06-07T23:16:03Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-07T23:11:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
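No usage notes are given; a minimal inference sketch with the `transformers` image-classification pipeline (model id taken from this card; the image path is a placeholder, and the class names come from whatever label mapping the checkpoint defines):
```python
from transformers import pipeline

# Model id taken from this card.
classifier = pipeline("image-classification", model="andymunoz20/vit-base-patch16-224-finetuned-flower")

# "flower.jpg" is a placeholder; a local file path or an image URL both work.
print(classifier("flower.jpg", top_k=3))
```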
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
LisanneH/whisper-small-nl-Synthetic_2
|
LisanneH
| 2023-06-07T23:11:23Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nl",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-07T20:42:52Z |
---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- /content/drive/MyDrive/MAAI/Synthetische_data
metrics:
- wer
model-index:
- name: Whisper Small nl Synthetic
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: /content/drive/MyDrive/MAAI/Synthetische_data
name: Synthetic Speech
config: nl
split: train
metrics:
- type: wer
value: 20.90
name: WER
---
|
tingtone/jq_emo_gpt
|
tingtone
| 2023-06-07T23:10:32Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-02T05:04:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: jq_emo_gpt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jq_emo_gpt
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2536
- Accuracy: 0.947
## Model description
More information needed
## Intended uses & limitations
More information needed
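No usage notes are given; a minimal inference sketch with the `transformers` text-classification pipeline is below (model id taken from this card; the emotion dataset defines six classes, sadness, joy, love, anger, fear and surprise, but the exact label strings exposed depend on the checkpoint's config):
```python
from transformers import pipeline

# Model id taken from this card; trained on the six-class "emotion" dataset.
classifier = pipeline("text-classification", model="tingtone/jq_emo_gpt")

# top_k=None returns a score for every label defined in the checkpoint's config.
print(classifier("I can't believe how well this turned out!", top_k=None))
```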
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 6400
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5109 | 1.0 | 16000 | 0.5014 | 0.929 |
| 0.3765 | 2.0 | 32000 | 0.3135 | 0.9385 |
| 0.2526 | 3.0 | 48000 | 0.2385 | 0.945 |
| 0.1952 | 4.0 | 64000 | 0.2536 | 0.947 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
HasinMDG/Topic_mpnet_L2
|
HasinMDG
| 2023-06-07T23:01:08Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-07T23:00:43Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/Topic_mpnet_L2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/Topic_mpnet_L2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|