Dataset schema (one row per model card):
- modelId: string, length 5 to 139
- author: string, length 2 to 42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-29 18:27:06
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 526 classes
- tags: list, length 1 to 4.05k
- pipeline_tag: string, 55 classes
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-29 18:26:56
- card: string, length 11 to 1.01M
Zekunli/flan-t5-large-extraction-all-cnndm_4000-ep5-nonstop
|
Zekunli
| 2023-05-12T01:04:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-12T00:38:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: flan-t5-large-extraction-all-cnndm_4000-ep5-nonstop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-extraction-all-cnndm_4000-ep5-nonstop
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7363
- Hint Hit Num: 1.936
- Hint Precision: 0.3338
- Num: 5.818
- Gen Len: 18.99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 40
- eval_batch_size: 80
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hint Hit Num | Hint Precision | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------------:|:-----:|:-------:|
| 2.1988 | 1.0 | 100 | 1.8020 | 1.852 | 0.3214 | 5.78 | 19.0 |
| 1.9385 | 2.0 | 200 | 1.7482 | 1.974 | 0.3426 | 5.796 | 18.986 |
| 1.8744 | 3.0 | 300 | 1.7407 | 1.976 | 0.3399 | 5.86 | 18.99 |
| 1.8422 | 4.0 | 400 | 1.7398 | 1.958 | 0.3382 | 5.816 | 18.99 |
| 1.8238 | 5.0 | 500 | 1.7363 | 1.936 | 0.3338 | 5.818 | 18.99 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
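As a rough illustration (not part of the original card), the `linear` scheduler above decays the learning rate from its initial value to zero over training. Assuming zero warmup steps and the 500 total steps implied by the results table (100 steps per epoch for 5 epochs), the rate at any step can be sketched as:

```python
def linear_lr(step, base_lr=2e-05, total_steps=500):
    """Linear decay of the learning rate to zero over training.

    A sketch of lr_scheduler_type `linear` above, assuming no warmup;
    total_steps = 100 steps/epoch * 5 epochs = 500.
    """
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))    # start of training: 2e-05
print(linear_lr(250))  # halfway through: 1e-05
print(linear_lr(500))  # end of training: 0.0
```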
|
muhammadravi251001/fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05
|
muhammadravi251001
| 2023-05-12T01:01:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-06T18:53:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2708
- Exact Match: 52.7487
- F1: 60.8071
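For context, SQuAD-style exact match and token-level F1 are typically computed per question roughly as below. This is an illustrative sketch with hypothetical toy strings, not the card author's evaluation code:

```python
from collections import Counter

def exact_match(pred, gold):
    # 1.0 only when the normalized strings are identical.
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    # Harmonic mean of token-overlap precision and recall.
    p_tokens, g_tokens = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p_tokens) & Counter(g_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_tokens)
    recall = overlap / len(g_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("di jakarta", "Di Jakarta"))  # 1.0
print(token_f1("ibu kota jakarta", "jakarta"))  # partial credit: 0.5
```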
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
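The batch-size settings above are related: gradient accumulation multiplies the per-device batch size to give the effective (total) train batch size seen by the optimizer. A minimal sketch of the arithmetic:

```python
train_batch_size = 4               # examples per forward/backward pass
gradient_accumulation_steps = 32   # gradients accumulated before each optimizer step

# Number of examples that effectively contribute to each optimizer update:
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching total_train_batch_size above
```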
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.4745 | 0.49 | 36 | 2.5724 | 35.6021 | 37.8405 |
| 3.5197 | 0.98 | 72 | 1.9912 | 28.0105 | 35.4278 |
| 2.1756 | 1.48 | 108 | 1.6669 | 35.7330 | 43.0612 |
| 2.1756 | 1.97 | 144 | 1.5047 | 39.3979 | 46.1664 |
| 1.6725 | 2.46 | 180 | 1.3222 | 45.9424 | 52.9355 |
| 1.336 | 2.95 | 216 | 1.3205 | 44.1099 | 51.6851 |
| 1.176 | 3.45 | 252 | 1.2526 | 47.5131 | 55.3298 |
| 1.176 | 3.94 | 288 | 1.2778 | 47.3822 | 54.7110 |
| 1.1089 | 4.44 | 324 | 1.2291 | 49.8691 | 57.2303 |
| 0.967 | 4.93 | 360 | 1.1944 | 52.4869 | 60.2202 |
| 0.967 | 5.42 | 396 | 1.2122 | 53.7958 | 61.3033 |
| 0.9202 | 5.91 | 432 | 1.2348 | 54.0576 | 61.6263 |
| 0.8719 | 6.41 | 468 | 1.2206 | 55.2356 | 62.9267 |
| 0.8205 | 6.9 | 504 | 1.2472 | 53.9267 | 61.6359 |
| 0.8205 | 7.4 | 540 | 1.2764 | 52.3560 | 60.2681 |
| 0.7907 | 7.89 | 576 | 1.2382 | 55.3665 | 63.0145 |
| 0.7533 | 8.38 | 612 | 1.2812 | 52.4869 | 60.4214 |
| 0.7533 | 8.87 | 648 | 1.2474 | 53.1414 | 60.6338 |
| 0.7345 | 9.37 | 684 | 1.2708 | 52.7487 | 60.8071 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
DunnBC22/fBERT-Hate_Offensive_or_Normal_Speech
|
DunnBC22
| 2023-05-12T00:36:49Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-03T20:56:07Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: fBERT-Hate_Offensive_or_Normal_Speech
results: []
language:
- en
pipeline_tag: text-classification
---
# fBERT-Hate_Offensive_or_Normal_Speech
This model is a fine-tuned version of [diptanu/fBERT](https://huggingface.co/diptanu/fBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1282
- Accuracy: 0.9607
- Weighted f1: 0.9605
- Micro f1: 0.9607
- Macro f1: 0.9581
- Weighted recall: 0.9607
- Micro recall: 0.9607
- Macro recall: 0.9571
- Weighted precision: 0.9609
- Micro precision: 0.9607
- Macro precision: 0.9596
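The micro, macro, and weighted averages reported above differ only in how per-class scores are combined. A small pure-Python sketch on hypothetical toy labels (not the card's data):

```python
from collections import Counter

y_true = ["hate", "offensive", "normal", "normal", "hate", "normal"]
y_pred = ["hate", "normal", "normal", "normal", "hate", "offensive"]
labels = sorted(set(y_true))

def f1(label):
    # One-vs-rest F1 for a single class.
    tp = sum(t == p == label for t, p in zip(y_true, y_pred))
    fp = sum(p == label != t for t, p in zip(y_true, y_pred))
    fn = sum(t == label != p for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Macro: unweighted mean over classes.
macro_f1 = sum(f1(l) for l in labels) / len(labels)

# Weighted: mean over classes, weighted by each class's true count (support).
support = Counter(y_true)
weighted_f1 = sum(f1(l) * support[l] for l in labels) / len(y_true)

# Micro averaging pools every decision; for single-label multiclass
# problems, micro F1, micro precision, and micro recall all equal accuracy.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```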
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Multiclass%20Classification/Transformer%20Comparison/Hate%20%26%20Offensive%20Speech%20-%20fBERT.ipynb
### Associated Models
This project is part of a comparison that included the following models:
- https://huggingface.co/DunnBC22/bert-large-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/bert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/distilbert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/hateBERT-Hate_Offensive_or_Normal_Speech
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
The main limitation is the quality of the data source.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/subhajournal/normal-hate-and-offensive-speeches
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.8316 | 1.0 | 39 | 0.5146 | 0.6754 | 0.5655 | 0.6754 | 0.5312 | 0.6754 | 0.6754 | 0.6324 | 0.4902 | 0.6754 | 0.4616 |
| 0.3628 | 2.0 | 78 | 0.2042 | 0.8820 | 0.8786 | 0.8820 | 0.8706 | 0.8820 | 0.8820 | 0.8685 | 0.8930 | 0.8820 | 0.8922 |
| 0.1767 | 3.0 | 117 | 0.1282 | 0.9607 | 0.9605 | 0.9607 | 0.9581 | 0.9607 | 0.9607 | 0.9571 | 0.9609 | 0.9607 | 0.9596 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.12.1
|
Mizuiro-sakura/luke-japanese-base-finetuned-ner
|
Mizuiro-sakura
| 2023-05-12T00:36:17Z | 687 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"luke",
"token-classification",
"ner",
"固有表現抽出",
"named entity recognition",
"named-entity-recognition",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-17T23:36:52Z |
---
license: mit
language: ja
tags:
- luke
- pytorch
- transformers
- ner
- 固有表現抽出
- named entity recognition
- named-entity-recognition
---
# This model is a fine-tuned version of luke-japanese-base for named entity recognition (NER)
This model was created by fine-tuning luke-japanese-base on a Japanese named-entity-recognition dataset built from Wikipedia (by Stockmark Inc., https://github.com/stockmarkteam/ner-wikipedia-dataset ).
It can be used for NER tasks.
# Model accuracy
|| precision |recall | f1-score | support|
|---|----|----|----|----|
|Other organization (その他の組織名) | 0.76 | 0.77 | 0.77 | 238 |
|Event (イベント名) | 0.83 | 0.90 | 0.87 | 215 |
|Person (人名) | 0.88 | 0.91 | 0.90 | 546 |
|Location (地名) | 0.84 | 0.83 | 0.83 | 440 |
|Political organization (政治的組織名) | 0.80 | 0.84 | 0.82 | 263 |
|Facility (施設名) | 0.78 | 0.83 | 0.80 | 241 |
|Corporation (法人名) | 0.88 | 0.90 | 0.89 | 487 |
|Product (製品名) | 0.74 | 0.80 | 0.77 | 252 |
|micro avg | 0.83 | 0.86 | 0.84 | 2682 |
|macro avg | 0.81 | 0.85 | 0.83 | 2682 |
|weighted avg | 0.83 | 0.86 | 0.84 | 2682 |
# How to use
Install sentencepiece and transformers (`pip install sentencepiece transformers`), then run the code below to perform NER.
```python
from transformers import MLukeTokenizer, LukeForTokenClassification, pipeline

tokenizer = MLukeTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-ner')
model = LukeForTokenClassification.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-ner')  # load the fine-tuned model

text = '昨日は東京で買い物をした'
ner = pipeline('ner', model=model, tokenizer=tokenizer)
result = ner(text)
print(result)
```
# What is LUKE? [1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model of words and entities. LUKE treats words and entities as independent tokens and outputs contextualized representations of them.
# Acknowledgments
I would like to thank Mr. Yamada (@ikuyamada) and Studio Ousia (@StudioOusia), the developers of LUKE.
# Citation
[1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
|
ToddGoldfarb/Cadet-Tiny
|
ToddGoldfarb
| 2023-05-12T00:18:41Z | 5,677 | 5 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"conversational",
"en",
"dataset:allenai/soda",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-07T06:34:12Z |
---
license: openrail
datasets:
- allenai/soda
language:
- en
pipeline_tag: conversational
---
# What is Cadet-Tiny?
Inspired by Allen AI's **Cosmo-XL**, **Cadet-Tiny** is a _very small_ conversational model trained on the **SODA** dataset. **Cadet-Tiny** is intended for inference at the edge (on something as small as a 2GB RAM Raspberry Pi).
**Cadet-Tiny** was trained from Google's pretrained **t5-small** model and is, as a result, about 2% of the size of the **Cosmo-3B** model.
This is the first SEQ2SEQ NLP model I've ever made! I'm very excited to share it here on HuggingFace! :)
If you have any questions, or any comments on improvements, please contact me at: **tcgoldfarb@gmail.com**
# Google Colab Link
Here is the link to the Google Colab file, where I walk through the process of training the model and using the SODA public dataset from AI2.
https://colab.research.google.com/drive/1cx3Yujr_jGQkseqzXZW-2L0vEyEjds_s?usp=sharing
# Get Started With Cadet-Tiny
Use the code snippet below to get started with Cadet-Tiny!
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import colorful as cf
cf.use_true_colors()
cf.use_style('monokai')
class CadetTinyAgent:
def __init__(self):
print(cf.bold | cf.purple("Waking up Cadet-Tiny..."))
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.tokenizer = AutoTokenizer.from_pretrained("t5-small", model_max_length=512)
self.model = AutoModelForSeq2SeqLM.from_pretrained("ToddGoldfarb/Cadet-Tiny", low_cpu_mem_usage=True).to(self.device)
self.conversation_history = ""
def observe(self, observation):
self.conversation_history = self.conversation_history + observation
        # Crude truncation safety net: once the history grows past 400
        # characters, drop the oldest 112 characters to make room for new input.
        if len(self.conversation_history) > 400:
            self.conversation_history = self.conversation_history[112:]
def set_input(self, situation_narrative="", role_instruction=""):
input_text = "dialogue: "
if situation_narrative != "":
input_text = input_text + situation_narrative
if role_instruction != "":
input_text = input_text + " <SEP> " + role_instruction
input_text = input_text + " <TURN> " + self.conversation_history
# Uncomment the line below to see what is fed to the model.
# print(input_text)
return input_text
def generate(self, situation_narrative, role_instruction, user_response):
user_response = user_response + " <TURN> "
self.observe(user_response)
input_text = self.set_input(situation_narrative, role_instruction)
inputs = self.tokenizer([input_text], return_tensors="pt").to(self.device)
# I encourage you to change the hyperparameters of the model! Start by trying to modify the temperature.
outputs = self.model.generate(inputs["input_ids"], max_new_tokens=512, temperature=0.75, top_p=.95,
do_sample=True)
cadet_response = self.tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
added_turn = cadet_response + " <TURN> "
self.observe(added_turn)
return cadet_response
    def reset_history(self):
        # The history is a string elsewhere in this class, so reset it
        # to an empty string (resetting to a list would break observe()).
        self.conversation_history = ""
def run(self):
def get_valid_input(prompt, default):
while True:
user_input = input(prompt)
if user_input in ["Y", "N", "y", "n"]:
return user_input
if user_input == "":
return default
while True:
continue_chat = ""
# MODIFY THESE STRINGS TO YOUR LIKING :)
situation_narrative = "Imagine you are Cadet-Tiny talking to ???."
role_instruction = "You are Cadet-Tiny, and you are talking to ???."
self.chat(situation_narrative, role_instruction)
continue_chat = get_valid_input(cf.purple("Start a new conversation with new setup? [Y/N]:"), "Y")
if continue_chat in ["N", "n"]:
break
print(cf.blue("CT: See you!"))
def chat(self, situation_narrative, role_instruction):
print(cf.green(
"Cadet-Tiny is running! Input [RESET] to reset the conversation history and [END] to end the conversation."))
while True:
user_input = input("You: ")
if user_input == "[RESET]":
self.reset_history()
print(cf.green("[Conversation history cleared. Chat with Cadet-Tiny!]"))
continue
if user_input == "[END]":
break
response = self.generate(situation_narrative, role_instruction, user_input)
print(cf.blue("CT: " + response))
def main():
print(cf.bold | cf.blue("LOADING MODEL"))
CadetTiny = CadetTinyAgent()
CadetTiny.run()
if __name__ == '__main__':
main()
```
# Citations and Special Thanks
Special thanks to Hyunwoo Kim for discussing with me the best way to use the SODA dataset. If you haven't looked into their work with SODA, Prosocial-Dialog, or COSMO, I recommend you do so! As well, read the paper on SODA!
The article is listed below.
```
@article{kim2022soda,
title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
journal={ArXiv},
year={2022},
volume={abs/2212.10465}
}
```
|
sasha0552/pygmalion-7b-q8_0-ggml
|
sasha0552
| 2023-05-12T00:00:04Z | 0 | 2 | null |
[
"text generation",
"conversational",
"en",
"license:other",
"region:us"
] | null | 2023-05-11T22:40:20Z |
---
license: other
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 7B (q8_0 ggml version)
Converted from the XORed weights from [PygmalionAI](https://huggingface.co/PygmalionAI/pygmalion-7b), i.e. ready for use. Conversion steps:
- converted from bfloat16 to float16
- converted from float16 to ggml float16
- quantized from ggml float16 to ggml q8_0
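For intuition about the final quantization step, a q8_0-style scheme stores one float scale per small block of weights plus an int8 value per weight. The sketch below is illustrative only (it is not ggml's actual on-disk layout):

```python
import random

random.seed(0)

def quantize_q8_0(values, block_size=32):
    # Each block stores one float scale plus `block_size` int8 values,
    # roughly how ggml's q8_0 quantization works (illustrative sketch).
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        scale = max(abs(v) for v in block) / 127.0 or 1.0
        quants = [max(-127, min(127, round(v / scale))) for v in block]
        blocks.append((scale, quants))
    return blocks

def dequantize_q8_0(blocks):
    return [scale * q for scale, quants in blocks for q in quants]

weights = [random.uniform(-1.0, 1.0) for _ in range(64)]
restored = dequantize_q8_0(quantize_q8_0(weights))
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The reconstruction error per weight is bounded by half the block scale, which is why 8-bit block quantization loses very little quality relative to float16.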
|
Lykon/NeverEnding-Dream
|
Lykon
| 2023-05-11T23:43:42Z | 330 | 162 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"art",
"artistic",
"en",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-19T17:54:51Z |
---
language:
- en
license: other
tags:
- stable-diffusion
- text-to-image
- art
- artistic
- diffusers
inference: false
---
# NeverEnding Dream (NED)
## Official Repository
Read more about this model here: https://civitai.com/models/10028/neverending-dream-ned
Please support this model by giving it 5 stars and a heart, which will also notify you of new updates.
You can also support me on Patreon or BuyMeACoffee:
- https://www.patreon.com/Lykon275
You can run this model on:
- https://sinkin.ai/m/qGdxrYG
Some sample output:






|
DunnBC22/distilbert-base-uncased-Regression-Edmunds_Car_Reviews-all_car_brands
|
DunnBC22
| 2023-05-11T23:35:06Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-27T04:43:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-Regression-Edmunds_Car_Reviews-all_car_brands
results: []
language:
- en
---
# distilbert-base-uncased-Regression-Edmunds_Car_Reviews-all_car_brands
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2232
- Mse: 0.2232
- Rmse: 0.4724
- Mae: 0.3150
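The metrics above are related: RMSE is the square root of MSE, and MAE is the mean absolute error (note that for a model trained with MSE loss, the reported loss and MSE coincide). A quick sketch on hypothetical toy numbers, not the card's dataset:

```python
import math

ratings = [4.0, 5.0, 3.0, 4.5]    # hypothetical true star ratings
predicted = [3.5, 4.8, 3.4, 4.4]  # hypothetical model outputs

errors = [p - t for p, t in zip(predicted, ratings)]
mse = sum(e * e for e in errors) / len(errors)   # mean squared error
rmse = math.sqrt(mse)                            # root mean squared error
mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error

print(round(mse, 4), round(rmse, 4), round(mae, 4))
```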
## Model description
This model predicts a car's rating from the text of its review, across all auto manufacturers.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/NLP%20Regression/Edmunds%20Car%20Reviews%20(All%20Brands)/Edmunds_Consumer_car-Regression-All%20Manufacturers.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.3936 | 1.0 | 2592 | 0.2282 | 0.2282 | 0.4777 | 0.3158 |
| 0.2163 | 2.0 | 5184 | 0.2160 | 0.2160 | 0.4647 | 0.3106 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DunnBC22/distilbert-base-uncased-Regression-Edmunds_Car_Reviews-American_Made
|
DunnBC22
| 2023-05-11T23:33:03Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-20T15:58:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-Regression-Edmunds_Car_Reviews-American_Made
results: []
language:
- en
---
# distilbert-base-uncased-Regression-Edmunds_Car_Reviews-American_Made
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2486
- Mae: 0.3469
- Mse: 0.2486
- Rmse: 0.4986
## Model description
This model predicts a car's rating from the text of its review (American-headquartered auto manufacturers only).
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/NLP%20Regression/HF-Edmunds_Consumer_car-Regression-American.ipynb
## Intended uses & limitations
I used this to improve my skill set. I thank all of the authors of the different technologies and datasets for the contributions that made this possible. I am not too worried about getting credit for my part, but make sure to properly cite the authors of the technologies and datasets, as they absolutely deserve credit for their contributions.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews
I only used car manufacturers headquartered in America that are not luxury brands.
Additionally, I removed manufacturers with limited reviews.
## Training procedure
The script for this project will be uploaded to my GitHub profile soon.
Once it is, I will make sure to add the link here.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.6385 | 1.0 | 777 | 0.2743 | 0.3633 | 0.2743 | 0.5237 |
| 0.2551 | 2.0 | 1554 | 0.2588 | 0.3536 | 0.2588 | 0.5088 |
| 0.2161 | 3.0 | 2331 | 0.2568 | 0.3508 | 0.2568 | 0.5068 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1
|
DunnBC22/distilbert-base-uncased-Regression-Edmunds_Car_Reviews-European_Made
|
DunnBC22
| 2023-05-11T23:32:44Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-17T20:27:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-Regression-Edmunds_Car_Reviews-European_Made
results: []
language:
- en
---
# distilbert-base-uncased-Regression-Edmunds_Car_Reviews-European_Made
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1999
- Mse: 0.1999
- Rmse: 0.4471
- Mae: 0.2824
## Model description
This model predicts a car's rating from the text of its review (European-headquartered auto manufacturers only).
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/NLP%20Regression/HF-Edmunds_Consumer_car-Regression-European.ipynb
## Intended uses & limitations
I used this to improve my skill set. I thank all of the authors of the different technologies and datasets for the contributions that made this possible. I am not too worried about getting credit for my part, but make sure to properly cite the authors of the technologies and datasets, as they absolutely deserve credit for their contributions.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews
I only used car manufacturers headquartered in Europe that are not luxury brands.
Additionally, I removed manufacturers with limited reviews.
## Training procedure
The script for this project will be uploaded to my GitHub profile soon. Once it is, I will make sure to add the link here.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.4892 | 1.0 | 236 | 0.2587 | 0.2587 | 0.5086 | 0.3120 |
| 0.2384 | 2.0 | 472 | 0.2359 | 0.2359 | 0.4857 | 0.2994 |
| 0.188 | 3.0 | 708 | 0.2304 | 0.2304 | 0.4800 | 0.2948 |
| 0.1558 | 4.0 | 944 | 0.2443 | 0.2443 | 0.4942 | 0.2981 |
| 0.133 | 5.0 | 1180 | 0.2410 | 0.2410 | 0.4909 | 0.2983 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1
|
DunnBC22/distilbert-base-uncased-Regression-Simpsons_Plus_Others
|
DunnBC22
| 2023-05-11T23:31:00Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-30T03:07:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-Regression-Simpsons_Plus_Others
results: []
language:
- en
---
# distilbert-base-uncased-Regression-Simpsons_Plus_Others
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3754
- Mse: 0.3754
- Rmse: 0.6127
- Mae: 0.4651
## Model description
This model predicts episode ratings for the following TV shows:
- The Simpsons
- Brooklyn Nine Nine
- Seinfeld
- The Big Bang Theory
- 30 Rock
- Community
- Parks and Recreation
- The Office
- How I Met Your Mother
- Modern Family
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/NLP%20Regression/NLP%20Regression%20-%20Simpsons%20Plus%20Other%20Series.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Data Sources:
- https://www.kaggle.com/datasets/mattbarty/the-simpsons-s1s32-imdb-scores-episode-info
- https://www.kaggle.com/datasets/maddyramsey/brookyln-nine-nine-imdb-ratings
- https://www.kaggle.com/datasets/hod101s/seinfeld-imdb-ratings
- https://www.kaggle.com/datasets/bcruise/big-bang-theory-episodes?select=big_bang_theory_imdb.csv
- https://www.kaggle.com/datasets/bcruise/30-rock-episode-data?select=30_rock_imdb.csv
- https://www.kaggle.com/datasets/imbenab/community-episodes-imdb-ratings
- https://www.kaggle.com/datasets/bcruise/parks-and-recreation-episode-data?select=parks_and_rec_imdb.csv
- https://www.kaggle.com/datasets/kapastor/the-office-imdb-ratings-per-episode
- https://www.kaggle.com/datasets/bcruise/how-i-met-your-mother-episodes-data?select=himym_imdb.csv
- https://www.kaggle.com/datasets/rprkh15/modern-family-dataset
Also, I pulled the episode description and rating from IMDb for the following TV shows:
- Two and a Half Men
- Young Sheldon
- Married... With Children
- Family Guy
- South Park
- That '70s Show
- It's Always Sunny in Philadelphia
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 29.5977 | 1.0 | 51 | 7.9215 | 7.9215 | 2.8145 | 2.7032 |
| 4.4551 | 2.0 | 102 | 0.6728 | 0.6728 | 0.8202 | 0.6039 |
| 2.0068 | 3.0 | 153 | 0.6034 | 0.6034 | 0.7768 | 0.5882 |
| 1.8734 | 4.0 | 204 | 0.4423 | 0.4423 | 0.6651 | 0.4975 |
| 1.7607 | 5.0 | 255 | 0.3971 | 0.3971 | 0.6302 | 0.4725 |
| 1.6901 | 6.0 | 306 | 0.4005 | 0.4005 | 0.6328 | 0.4751 |
| 1.6525 | 7.0 | 357 | 0.4001 | 0.4001 | 0.6325 | 0.4766 |
| 1.6103 | 8.0 | 408 | 0.4278 | 0.4278 | 0.6541 | 0.4954 |
| 1.5659 | 9.0 | 459 | 0.3903 | 0.3903 | 0.6247 | 0.4618 |
| 1.4968 | 10.0 | 510 | 0.3987 | 0.3987 | 0.6314 | 0.4670 |
| 1.4983 | 11.0 | 561 | 0.4764 | 0.4764 | 0.6902 | 0.5324 |
| 1.4659 | 12.0 | 612 | 0.3913 | 0.3913 | 0.6256 | 0.4616 |
| 1.4532 | 13.0 | 663 | 0.4511 | 0.4511 | 0.6716 | 0.5153 |
| 1.4515 | 14.0 | 714 | 0.4009 | 0.4009 | 0.6332 | 0.4768 |
| 1.4506 | 15.0 | 765 | 0.4588 | 0.4588 | 0.6773 | 0.5160 |
| 1.4249 | 16.0 | 816 | 0.3940 | 0.3940 | 0.6277 | 0.4630 |
| 1.4254 | 17.0 | 867 | 0.4456 | 0.4456 | 0.6675 | 0.5084 |
| 1.4023 | 18.0 | 918 | 0.4517 | 0.4517 | 0.6721 | 0.5096 |
| 1.3754 | 19.0 | 969 | 0.4210 | 0.4210 | 0.6489 | 0.4869 |
| 1.3865 | 20.0 | 1020 | 0.4163 | 0.4163 | 0.6452 | 0.4830 |
| 1.3802 | 21.0 | 1071 | 0.4290 | 0.4290 | 0.6550 | 0.4904 |
| 1.4087 | 22.0 | 1122 | 0.4097 | 0.4097 | 0.6401 | 0.4745 |
| 1.3855 | 23.0 | 1173 | 0.4438 | 0.4438 | 0.6662 | 0.5027 |
| 1.3911 | 24.0 | 1224 | 0.4302 | 0.4302 | 0.6559 | 0.4906 |
| 1.3877 | 25.0 | 1275 | 0.4287 | 0.4287 | 0.6547 | 0.4887 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sasha0552/pygmalion-7b-q4_0-ggml
|
sasha0552
| 2023-05-11T23:16:39Z | 0 | 1 | null |
[
"text generation",
"conversational",
"en",
"license:other",
"region:us"
] | null | 2023-05-11T22:40:09Z |
---
license: other
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 7B (q4_0 ggml version)
Converted from the XORed weights from [PygmalionAI](https://huggingface.co/PygmalionAI/pygmalion-7b), i.e. ready for use. Conversion steps:
- converted from bfloat16 to float16
- converted from float16 to ggml float16
- quantized from ggml float16 to ggml q4_0
|
KizukiAi/Kizuki-Anime-Civitai-v2
|
KizukiAi
| 2023-05-11T22:36:40Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T22:18:10Z |
---
license: creativeml-openrail-m
---
|
Habuki/kanata-konoe-so-vits-svc-model
|
Habuki
| 2023-05-11T21:59:11Z | 12 | 6 |
transformers
|
[
"transformers",
"music",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2023-05-11T18:39:53Z |
---
license: creativeml-openrail-m
tags:
- music
---
<div align="center">
<h1>sovits4.0 Model</h1>
<img src="https://static.zerochan.net/Konoe.Kanata.full.3012444.jpg" height="200" alt="emu">
<h1>the Model is</h1>
<h1>Kanata Konoe (CV : Akari Kito) from Love Live! Nijigasaki</h1>
|
Hackerino/finetuning-sentiment-model-3000-samples
|
Hackerino
| 2023-05-11T21:30:38Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-29T11:18:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8599999999999999
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3201
- Accuracy: 0.86
- F1: 0.8600
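The reported accuracy and F1 follow from the standard definitions for binary classification. A minimal pure-Python sketch of how they are computed, on toy labels rather than the actual IMDb evaluation set:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy and F1 for binary labels, with 1 as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

acc, f1 = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```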
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
DunnBC22/bert-large-uncased-Hate_Offensive_or_Normal_Speech
|
DunnBC22
| 2023-05-11T21:28:37Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-16T05:08:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-uncased-Hate_Offensive_or_Normal_Speech
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-Hate_Offensive_or_Normal_Speech
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0443
- Accuracy: 0.9869
- Weighted f1: 0.9869
- Micro f1: 0.9869
- Macro f1: 0.9863
- Weighted recall: 0.9869
- Micro recall: 0.9869
- Macro recall: 0.9857
- Weighted precision: 0.9869
- Micro precision: 0.9869
- Macro precision: 0.9870
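The macro, micro, and weighted variants above differ only in how per-class scores are averaged: macro treats every class equally, weighted averages by class support, and micro pools all decisions (so in single-label classification, micro precision equals micro recall equals accuracy). A toy sketch of the averaging step, using hypothetical per-class scores rather than this model's:

```python
# Hypothetical per-class F1 scores and supports for a 3-class task.
per_class_f1 = {"hate": 0.90, "offensive": 0.95, "normal": 0.99}
support = {"hate": 50, "offensive": 100, "normal": 350}

# Macro: unweighted mean over classes.
macro = sum(per_class_f1.values()) / len(per_class_f1)

# Weighted: mean over classes, weighted by class support.
total = sum(support.values())
weighted = sum(per_class_f1[c] * support[c] / total for c in per_class_f1)

# The rare, lower-scoring "hate" class drags macro below weighted.
assert macro < weighted
```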
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Multiclass%20Classification/Transformer%20Comparison/Hate%20%26%20Offensive%20Speech%20-%20BERT-Large.ipynb
### Associated Models
This project is part of a comparison that included the following models:
- https://huggingface.co/DunnBC22/bert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/distilbert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/fBERT-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/hateBERT-Hate_Offensive_or_Normal_Speech
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
The main limitation is the quality of the data source.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/subhajournal/normal-hate-and-offensive-speeches
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.7991 | 1.0 | 39 | 0.4235 | 0.7430 | 0.7100 | 0.7430 | 0.6902 | 0.7430 | 0.7430 | 0.7049 | 0.7782 | 0.7430 | 0.7886 |
| 0.2156 | 2.0 | 78 | 0.1072 | 0.9607 | 0.9605 | 0.9607 | 0.9585 | 0.9607 | 0.9607 | 0.9569 | 0.9607 | 0.9607 | 0.9605 |
| 0.0518 | 3.0 | 117 | 0.0518 | 0.9869 | 0.9869 | 0.9869 | 0.9863 | 0.9869 | 0.9869 | 0.9857 | 0.9869 | 0.9869 | 0.9870 |
| 0.0242 | 4.0 | 156 | 0.0500 | 0.9853 | 0.9852 | 0.9853 | 0.9845 | 0.9853 | 0.9853 | 0.9841 | 0.9853 | 0.9853 | 0.9850 |
| 0.0163 | 5.0 | 195 | 0.0443 | 0.9869 | 0.9869 | 0.9869 | 0.9863 | 0.9869 | 0.9869 | 0.9857 | 0.9869 | 0.9869 | 0.9870 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1
|
bigscience/bloom-1b7
|
bigscience
| 2023-05-11T21:17:30Z | 39,825 | 121 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-19T11:52:06Z |
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-1b7
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
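These numbers are mutually consistent: perplexity is the exponential of the cross-entropy loss in nats, and exp(2.2) ≈ 9.0, close to the reported 8.9 (the loss values above are rounded). A quick check:

```python
import math

validation_loss = 2.2              # cross-entropy in nats, as reported above
perplexity = math.exp(validation_loss)
# ≈ 9.0, in the ballpark of the reported 8.9 given rounding of the loss
```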
- [BLOOM Book](https://huggingface.co/spaces/bigscience/bloom-book): Read generations from BLOOM based on prompts provided by the community
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,722,408,960 parameters:
* 513,802,240 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 2048-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
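The embedding parameter count is consistent with the tokenizer vocabulary of 250,680 being padded to 250,880 entries, a multiple of 128, which is a common Megatron-LM convention for tensor-parallel efficiency (the padding is an inference on my part, not stated in this card): 250,880 × 2048 = 513,802,240. A quick arithmetic check:

```python
hidden_size = 2048
vocab_size = 250_680      # BLOOM tokenizer vocabulary (see the Tokenization section)
padded_vocab = 250_880    # assumed padding to a multiple of 128

assert padded_vocab % 128 == 0
assert padded_vocab * hidden_size == 513_802_240  # embedding parameters above
```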
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 64 V100 16/32GB GPUs (16 nodes):
* 4 GPUs per node
* 40 CPUs per task
* 1 task per node
* CPU: AMD
* CPU memory: 160GB per node
* GPU memory: 64GB or 128GB (depending on node availability during training) per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
- Checkpoint size:
- Fp16 weights: 2.6GB (# params * 2)
- Full checkpoint with optimizer states: --
- Training throughput: --
- Number of epochs: 1
- Dates:
- Start: 11th March, 2022 11:42am PST
- End: 20 May, 2022
- Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
## Citation
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** bigscience-contact@googlegroups.com
|
alikanakar/bert-base-multilingual-cased-0_8-finetuned-squad
|
alikanakar
| 2023-05-11T21:16:50Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-11T15:43:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-0_8-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-0_8-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3359 | 1.0 | 3960 | 0.9555 |
| 0.9686 | 2.0 | 7920 | 0.6868 |
| 0.721 | 3.0 | 11880 | 0.5706 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
gaussalgo/T5-LM-Large_Canard-Fullwiki-HotpotQA-rephrase
|
gaussalgo
| 2023-05-11T21:09:19Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:gaussalgo/Canard_Wiki-augmented",
"dataset:hotpot_qa",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-16T20:12:25Z |
---
datasets:
- gaussalgo/Canard_Wiki-augmented
- hotpot_qa
metrics:
- rouge
- bleu
model-index:
- name: T5-LM-Large_Canard-Fullwiki-HotpotQA-rephrase
results:
- task:
type: question-answering
name: Question Answering
dataset:
type: hotpot_qa
name: HotpotQA
split: validation
metrics:
- type: rouge
value: 0.4774
- type: bleu
value: 29.11
- task:
type: question-answering
name: Question Answering
dataset:
type: gaussalgo/Canard_Wiki-augmented
name: Wikipedia-augmented Conversational QA (Canard)
split: validation
metrics:
- type: rouge
value: 0.4377
- type: bleu
value: 19.34
license: cc-by-sa-4.0
language:
- en
---
# Model Card for T5-LM-Large_Canard-HotpotQA-rephrase
This model is trained on three objectives:
1. Generating answers for the Canard dataset based on Wikipedia search results,
2. Generating answers for HotpotQA,
3. Rephrasing questions based on the conversation context.
## Training
The model was trained using the following script, which can be copy-pasted and run as-is (with the dependencies from `requirements.txt` installed).
All details, including the expected input format, can be inferred from the code.
The best checkpoint was picked by maximum ROUGE on the Canard conversational QA validation set.
```python
import datasets
canard_train_augm = datasets.load_dataset("gaussalgo/Canard_Wiki-augmented", split="train")
canard_test_augm = datasets.load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
canard_df = canard_train_augm.to_pandas()
canard_test_df = canard_test_augm.to_pandas()
### Curation of seq2seq input contexts and labels
import random
def input_context_from_sample(row: dict, max_length=5) -> str:
context = "Previous conversation:"
context += "\nQuestion: "
context += ", ".join(row["History"][:3])
for i in range(3, len(row["History"]), 2):
context += "\nAnswer: "
context += row["History"][i]
if i+1 < len(row["History"]):
context += "\nQuestion: "
context += row["History"][i+1]
context += "\n\nCurrent Question: "
context += row["Question"]
context += "\nSearch results:"
all_contexts = row["retrieved_contexts"].tolist()[:max_length-1] + [row["true_contexts"]]
random.shuffle(all_contexts)
for i, search_result in enumerate(all_contexts):
context += "\n[%s]: " % (i+1)
context += search_result.replace("CANNOTANSWER", "")
context += "\nCurrent Answer: "
return context
def rephrasing_context_from_sample(row: dict) -> str:
context = "Previous conversation:"
context += "\nQuestion: "
context += ", ".join(row["History"][:3])
for i in range(3, len(row["History"]), 2):
context += "\nAnswer: "
context += row["History"][i]
if i+1 < len(row["History"]):
context += "\nQuestion: "
context += row["History"][i+1]
context += "\n\nCurrent Question: "
context += row["Question"]
context += "\nMore specific question: "
return context
def hotpotqa_context(row: dict) -> str:
context = "Current Question: "
context += row["question"]
context += "\nSearch results:"
all_contexts = [" ".join(context) for context in row["context"]["sentences"]]
for i, search_result in enumerate(all_contexts):
context += "\n[%s]: " % (i+1)
context += search_result.replace("CANNOTANSWER", "")
context += "\nCurrent Answer: "
return context
# Conversational QA sequences
input_texts = canard_df.apply(lambda row: input_context_from_sample(row), axis=1).values
input_val_texts = canard_test_df.iloc[:200].apply(lambda row: input_context_from_sample(row), axis=1).values
too_long_index = [len(t) > 20000 for t in input_texts]
input_texts = [t for i, t in enumerate(input_texts) if not too_long_index[i]]
print("training on %s samples" % len(input_texts))
labels = canard_df.answer.apply(lambda ans: "No answer" if ans == "CANNOTANSWER" else ans).values
labels = [l for i, l in enumerate(labels) if not too_long_index[i]]
val_labels = canard_test_df.answer.apply(lambda ans: "No answer" if ans == "CANNOTANSWER" else ans).values
# Rephrasing sequences
rephrasing_inputs = canard_df.apply(lambda row: rephrasing_context_from_sample(row), axis=1).values
rephrasing_val_inputs = canard_test_df.apply(lambda row: rephrasing_context_from_sample(row), axis=1).values
rephrasing_labels = canard_df.Rewrite.values
rephrasing_val_labels = canard_test_df.Rewrite.values
# HotpotQA sequences
hotpot_train = datasets.load_dataset("hotpot_qa", "distractor")["train"]
hotpot_val = datasets.load_dataset("hotpot_qa", "distractor")["validation"]
hotpot_inputs = hotpot_train.to_pandas().apply(hotpotqa_context, axis=1)
hotpot_val_inputs = hotpot_val.to_pandas().apply(hotpotqa_context, axis=1)
too_long_index = [len(t) > 20000 for t in hotpot_inputs]
hotpot_inputs = [t for i, t in enumerate(hotpot_inputs) if not too_long_index[i]]
hotpot_answers = [t for i, t in enumerate(hotpot_train["answer"]) if not too_long_index[i]]
# Training routine
# see Adaptor's homepage for details:
# https://github.com/gaussalgo/adaptor
# Base model
from adaptor.lang_module import LangModule
lang_module = LangModule("google/t5-large-lm-adapt")
from adaptor.evaluators.generative import ROUGE, BLEU
# Evaluations
evaluators = [BLEU(), ROUGE(decides_convergence=True)]
# Objectives
from adaptor.objectives.seq2seq import Sequence2Sequence
seq_qa = Sequence2Sequence(lang_module,
texts_or_path=input_texts,
labels_or_path=labels,
val_texts_or_path=input_val_texts,
val_labels_or_path=val_labels,
batch_size=4,
val_evaluators=evaluators,
objective_id="Canard")
seq_additional_qa = Sequence2Sequence(lang_module,
texts_or_path=hotpot_inputs,
labels_or_path=hotpot_answers,
val_texts_or_path=hotpot_val_inputs[:200],
val_labels_or_path=hotpot_val["answer"][:200],
batch_size=4,
val_evaluators=evaluators,
objective_id="HotpotQA",
share_other_objective_head=seq_qa)
seq_rephrasing = Sequence2Sequence(lang_module,
texts_or_path=rephrasing_inputs,
labels_or_path=rephrasing_labels,
val_texts_or_path=rephrasing_val_inputs[:200],
val_labels_or_path=rephrasing_val_labels[:200],
batch_size=4,
val_evaluators=evaluators,
objective_id="rephrasing",
share_other_objective_head=seq_qa)
# Training schedule & arguments
from adaptor.utils import AdaptationArguments, StoppingStrategy
training_arguments = AdaptationArguments(output_dir="checkpoints-chatbot",
learning_rate=5e-5,
stopping_strategy=StoppingStrategy.ALL_OBJECTIVES_CONVERGED,
stopping_patience=8,
save_total_limit=8,
do_train=True,
do_eval=True,
bf16=True,
warmup_steps=1000,
gradient_accumulation_steps=8,
logging_steps=10,
eval_steps=200,
save_steps=1000,
num_train_epochs=10,
evaluation_strategy="steps")
from adaptor.schedules import ParallelSchedule
from adaptor.adapter import Adapter
schedule = ParallelSchedule(objectives=[seq_qa, seq_additional_qa, seq_rephrasing],
args=training_arguments)
adapter = Adapter(lang_module, schedule, args=training_arguments)
adapter.train() # Training for 63k updates
```
## Usage
See the prompting templates used in training to infer the optimal prompting format.
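As a hedged illustration only (the real template is whatever `input_context_from_sample` in the training code above produces), a conversational QA sample might be flattened into a single seq2seq input along these lines:

```python
def flatten_conversation(history, question, context):
    # Hypothetical flattening of a conversational QA sample into one
    # seq2seq input string; the actual prompt template is defined by
    # `input_context_from_sample` in the preprocessing code above.
    turns = " ".join("Q: %s A: %s" % (q, a) for q, a in history)
    return "%s Q: %s Context: %s" % (turns, question, context)

example = flatten_conversation(
    [("Who wrote Hamlet?", "William Shakespeare")],
    "When was he born?",
    "Shakespeare was baptised on 26 April 1564.",
)
```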
#### Contact
Feel free to ask questions here, or at stefanik{at} gaussalgo.com
|
proximasanfinetuning/luna-diffusion
|
proximasanfinetuning
| 2023-05-11T21:09:13Z | 221 | 45 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"painterly",
"painting",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-06T17:43:13Z |
---
license: other
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- painterly
- painting
- diffusers
inference: false
---
[<img src="https://huggingface.co/proximasanfinetuning/luna-diffusion/resolve/main/cover%232.jpg">](https://huggingface.co/proximasanfinetuning/luna-diffusion/blob/main/cover%232.jpg)
# → about
- this was finetuned on a few hundred & mostly hand-captioned highres images on SD 1.5 for ethereal, painterly vibes
- no trigger words/tokens, but you *can* add "painting" to the prompt to increase the painterly effect
- use "illustration" in prompts to get more vector art looking images
- works best at 768x768 px, 512x768 px or 768x512 px since it was finetuned on 768x768, so 512x512 will look overbaked
- DPM++ 2M usually looks nice and crisp; use Euler_a for a softer look
- i recommend adding “nude, naked” to your negative prompt if you don’t like boobas because this model certainly does (¬‿¬ )
- check my [blog entry](https://proximacentaurib.xyz/checkpoints/luna-diffusion/) for more examples, comparisons and tips on settings
---
[<img src="https://colab.research.google.com/assets/colab-badge.svg">](https://colab.research.google.com/drive/1ML9E3963yyMlyspmZUXcbXqPXB1g7j5x?usp=sharing)
# 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "proximasanfinetuning/luna-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "painting of a beautiful woman with red hair, 8k, high quality"
image = pipe(prompt, height=768, width=768).images[0]
image.save("./result.jpg")
```
# you can also get it as [CKPT](https://huggingface.co/proximasanfinetuning/luna-diffusion/blob/main/luna_diffusion_2-2.ckpt) or [Safetensors](https://huggingface.co/proximasanfinetuning/luna-diffusion/blob/main/luna_diffusion_2-2.safetensors)
----
# → some great images users on [stablecog.com](https://stablecog.com) made with it:
[<img src="https://huggingface.co/proximasanfinetuning/luna-diffusion/resolve/main/stablecog-samples.png">](https://huggingface.co/proximasanfinetuning/luna-diffusion/blob/main/stablecog-samples.png)
Links: [1](https://stablecog.com/gallery?output=b1be8a4b-5d56-4443-beef-e4468ba7f800) [2](https://stablecog.com/gallery?output=8e9eb1cc-5e18-4650-b15f-c6912c421c9c) [3](https://stablecog.com/gallery?output=9a291259-471a-4a32-b565-eac352141480)
[4](https://stablecog.com/gallery?output=3431ade8-2c21-438b-b4c8-d9c8b129014c) [5](https://stablecog.com/gallery?output=5cd5330e-eeb3-4db3-9cd4-0fc06fad038e) [6](https://stablecog.com/gallery?output=40748472-5e85-4e33-86a7-5fcb6edd9506)
[7](https://stablecog.com/gallery?output=6559d772-bdde-431b-97f5-1de26b780ad4) [8](https://stablecog.com/gallery?output=2414bacc-8025-4e09-b91a-9826eeb34045) [9](https://stablecog.com/gallery?output=28af6f65-d5d3-4fd2-ac89-aa77778999d9)
or check the [hashtag on twitter](https://twitter.com/search?q=%23lunadiffusion&src=typed_query&f=live)
----
# → finetuned to work well with specifying various skintones
[<img src="https://huggingface.co/proximasanfinetuning/luna-diffusion/resolve/main/%2314.jpg">](https://huggingface.co/proximasanfinetuning/luna-diffusion/blob/main/%2314.jpg)
[<img src="https://huggingface.co/proximasanfinetuning/luna-diffusion/resolve/main/%2315.jpg">](https://huggingface.co/proximasanfinetuning/luna-diffusion/blob/main/%2315.jpg)
----
if you enjoy this consider buying me a coffee or becoming a monthly supporter
(ノ◕ヮ◕)ノ*:・゚✧
<a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
----
# license
This model is licensed under a modified CreativeML OpenRAIL-M license.
* Utilizing and hosting the Luna Diffusion model and its derivatives on platforms that earn, will earn, or plan to earn revenue or donations requires prior authorization. **To request permission, please email proximasan@protonmail.com.**
* You are permitted to host the model card and files on both commercial and non-commercial websites, apps, etc. as long as you properly credit the model by stating its full name and providing a link to the model card (https://huggingface.co/proximasanfinetuning/luna-diffusion), without performing any actual inference or finetuning.
* The Luna Diffusion model and its derivatives can be hosted on non-commercial websites, apps, etc. as long as no revenue or donations are received. Proper credit must be given by stating the full model name and including a link to the model card (https://huggingface.co/proximasanfinetuning/luna-diffusion).
* **The outputs of the model or its derivatives can be used for commercial purposes as long as the usage is limited to teams of 10 or fewer individuals.**
* You can't use the model to deliberately produce or share illegal or harmful outputs or content
* The author claims no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
* You may re-distribute the weights. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: https://huggingface.co/proximasanfinetuning/luna-diffusion/blob/main/luna_diffusion_license.txt
|
ni-eminen/quiz_specific-spam_filter
|
ni-eminen
| 2023-05-11T21:07:56Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T09:10:50Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ni-eminen/quiz_specific-spam_filter
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ni-eminen/quiz_specific-spam_filter
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4113
- Validation Loss: 0.4286
- Train Accuracy: 0.8019
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2895, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
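For reference, the `PolynomialDecay` schedule above (power 1.0, cycle False) is just a linear ramp from 2e-05 down to 0 over 2895 steps. A plain-Python sketch of the decayed rate, mirroring the Keras formula:

```python
def decayed_lr(step, initial_lr=2e-05, decay_steps=2895, end_lr=0.0, power=1.0):
    # Keras PolynomialDecay with cycle=False clamps the step at decay_steps:
    # lr = (initial - end) * (1 - step / decay_steps) ** power + end
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1.0 - step / decay_steps) ** power + end_lr
```

`decayed_lr(0)` returns the initial 2e-05, `decayed_lr(2895)` returns 0.0, and any step past `decay_steps` stays at 0.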
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4617 | 0.4377 | 0.8042 | 0 |
| 0.4113 | 0.4286 | 0.8019 | 1 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Actuary/poca-SoccerTwos
|
Actuary
| 2023-05-11T21:04:01Z | 39 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-05-11T20:27:49Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Find your model_id: Actuary/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Neerajvibez/ppo-LunarLander-v2
|
Neerajvibez
| 2023-05-11T21:03:00Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-10T15:55:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.57 +/- 27.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
xpariz10/ast-finetuned-audioset-10-10-0.4593_ft_env_aug_0-2
|
xpariz10
| 2023-05-11T20:41:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-05-11T20:03:31Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ast-finetuned-audioset-10-10-0.4593_ft_env_aug_0-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593_ft_env_aug_0-2
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6899
- Accuracy: 0.9643
- Precision: 0.9694
- Recall: 0.9643
- F1: 0.9631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.0165 | 1.0 | 28 | 1.6252 | 0.4643 | 0.5373 | 0.4643 | 0.4711 |
| 1.3702 | 2.0 | 56 | 1.0553 | 0.8571 | 0.8929 | 0.8571 | 0.8536 |
| 0.8861 | 3.0 | 84 | 0.6899 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
| 0.5655 | 4.0 | 112 | 0.4766 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
| 0.4232 | 5.0 | 140 | 0.3403 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
| 0.3148 | 6.0 | 168 | 0.2679 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
| 0.2335 | 7.0 | 196 | 0.2239 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
| 0.176 | 8.0 | 224 | 0.1979 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
| 0.1624 | 9.0 | 252 | 0.1824 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
| 0.1466 | 10.0 | 280 | 0.1781 | 0.9643 | 0.9694 | 0.9643 | 0.9631 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.11.0
|
Schwarzschild009/ppo-Huggy
|
Schwarzschild009
| 2023-05-11T20:31:55Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-11T20:31:49Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: Schwarzschild009/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
FredS1000/ppo-LunarLander-v2
|
FredS1000
| 2023-05-11T20:21:08Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T20:20:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.28 +/- 16.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
shivansh-ka/Multilingual-Toxic-Comment-Roberta
|
shivansh-ka
| 2023-05-11T20:19:05Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-05-11T20:16:57Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | 1e-06 |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 1.9999999494757503e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
|
Innokentiy/FlowerNet
|
Innokentiy
| 2023-05-11T20:08:07Z | 0 | 1 | null |
[
"license:gpl-2.0",
"region:us"
] | null | 2023-05-11T19:04:42Z |
---
license: gpl-2.0
---
[](https://huggingface.co/Innokentiy)
# FlowerNet
## A neural network for multi-class flower classification.

## Introduction
The goal of this work is to develop a neural network for multi-class classification with **high resistance** to overfitting.
## Dataset
To solve the multi-class flower classification task, I used the tf_flowers dataset from TensorFlow.
The dataset has 5 flower classes: 'Dandelion', 'Daisy', 'Tulips', 'Sunflowers' and 'Roses', which is why the final Dense layer has 5 neurons. As for the splits, I divided the dataset into three parts: 0–80% for training, 80–90% for validation and 90–100% for testing.
## Network architecture
I used Xception as the architecture. The architecture diagram turned out to be large, so I decided not to embed it here and uploaded it to the project files instead.
The neural network is designed to run on tensor processing units (TPUs), which makes it possible to train for more epochs with more compute.
## Optimizer and loss function

My goal was to create a sturdy neural network with high resistance to overfitting.
And this is where the tuning begins.
With the Adam optimizer, which I had used before, accuracy reaches 90%, but the model overfits. So I decided to approach it from another angle and used the Adagrad (Adaptive Gradient) optimizer: its accuracy at epoch 10 was only 40%, but the more epochs, the better its accuracy, and the validation accuracy always stays above the training accuracy, with no overfitting. As the loss function I use SparseCategoricalCrossentropy, since that is the one to use with TPU models. Because my model runs on a tensor processing unit and gets through epochs quickly, I decided to increase the number of epochs to a thousand. Adagrad started at 40%, its accuracy gradually improved, and in the end I got 89.65% accuracy on the validation data and 87% on the test data. The plot also shows that the model does not overfit.
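Adagrad's slow start and steady improvement follow from its update rule: each parameter's step is divided by the square root of its accumulated squared gradients. A minimal plain-Python sketch (illustrative only, not the Keras implementation):

```python
import math

def adagrad_update(param, grad, accum, lr=0.01, eps=1e-07):
    # Adagrad accumulates squared gradients per parameter; the growing
    # accumulator shrinks the effective learning rate over time, which is
    # why it ramps up slowly but keeps improving.
    accum = accum + grad ** 2
    param = param - lr * grad / (math.sqrt(accum) + eps)
    return param, accum

p, acc = 1.0, 0.0
for _ in range(3):
    p, acc = adagrad_update(p, grad=0.5, accum=acc)
```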
## Result

The task is complete. I created a model that has solid protection against overfitting and a good accuracy of 87%.
In the project files the model is named FlowerNet.h5
GitHub page: https://github.com/laf3r/FlowerNet
> The program is provided as open source.
|
DunnBC22/distilbert-base-uncased-Financial_Sentiment_Analysis
|
DunnBC22
| 2023-05-11T19:50:05Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-25T05:44:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-Financial_Sentiment_Analysis
results: []
---
# distilbert-base-uncased-Financial_Sentiment_Analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3079
- Accuracy: 0.8529
- F1 Score: 0.8564
## Model description
This project classifies input samples as one of the following: negative, neutral, or positive.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Sentiment%20Analysis/Financial%20Sentiment%20Analysis/Financial%20Sentiment%20Analysis-Updated%20Version.ipynb
## Intended uses & limitations
More information needed
## Training and evaluation data
There were two datasets that I concatenated:
- https://www.kaggle.com/datasets/sbhatti/financial-sentiment-analysis
- https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-for-financial-news
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.5569 | 1.0 | 134 | 0.3954 | 0.7591 | 0.7559 |
| 0.3177 | 2.0 | 268 | 0.3391 | 0.8135 | 0.8151 |
| 0.2479 | 3.0 | 402 | 0.3211 | 0.8322 | 0.8353 |
| 0.2049 | 4.0 | 536 | 0.3066 | 0.8463 | 0.8506 |
| 0.1802 | 5.0 | 670 | 0.3079 | 0.8529 | 0.8564 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
### Similar Models
You can find two models similar to this one that I completed at these links:
- https://huggingface.co/DunnBC22/fnet-large-Financial_Sentiment_Analysis_v3
- https://huggingface.co/DunnBC22/fnet-base-Financial_Sentiment_Analysis
|
lgrobol/xlm-r-base_bzg
|
lgrobol
| 2023-05-11T19:45:42Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"br",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-11T19:29:52Z |
---
license: mit
language:
- br
pipeline_tag: fill-mask
---
|
research-dump/bert_base_temp_classifier
|
research-dump
| 2023-05-11T19:33:27Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T19:17:11Z |
BERT-based model trained on Definition classification data
|
Schwarzschild009/rl_course_vizdoom_health_gathering_supreme
|
Schwarzschild009
| 2023-05-11T19:31:15Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T18:39:33Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.47 +/- 5.06
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Schwarzschild009/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# assuming sample-factory's bundled ViZDoom example module:
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# assuming sample-factory's bundled ViZDoom example module:
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
mpheng/Reinforce-pixelcopter-2
|
mpheng
| 2023-05-11T19:22:44Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T19:22:39Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.00 +/- 33.29
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AlexWortega/EVILdolly
|
AlexWortega
| 2023-05-11T19:13:13Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:AlexWortega/EVILdolly",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-08T10:30:16Z |
---
license: cc
datasets:
- AlexWortega/EVILdolly
language:
- en
pipeline_tag: text-generation
---
Summary
EVILDolly is an open-source dataset of instruction-following records with wrong answers, derived from databricks-dolly-15k.
The dataset includes answers that are wrong but appear correct and reasonable. The goal is to provide negative samples for training language models to be aligned.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License.
|
afos950/ppo-Huggy
|
afos950
| 2023-05-11T18:37:17Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-11T18:37:10Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: afos950/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
WALIDALI/libyajarclo
|
WALIDALI
| 2023-05-11T18:32:23Z | 35 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-11T18:22:47Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### libyajarclo Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Adoley/covid-tweets-sentiment-analysis
|
Adoley
| 2023-05-11T18:30:13Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T08:09:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: covid-tweets-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-tweets-sentiment-analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6091
- Rmse: 0.6632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
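The effective batch size follows from the two settings above: 16 accumulation steps over micro-batches of 2 give one optimizer step per 32 examples. A plain-Python sketch of the bookkeeping (a hypothetical scalar parameter, not the actual Trainer internals):

```python
def sgd_with_accumulation(grads, accum_steps=16, lr=3e-05, param=0.0):
    # Average micro-batch gradients over `accum_steps` before each update,
    # so 16 micro-batches of size 2 act like one batch of 32
    # (matching total_train_batch_size above).
    buffer = 0.0
    for i, g in enumerate(grads):
        buffer += g / accum_steps          # scale each micro-batch gradient
        if (i + 1) % accum_steps == 0:
            param -= lr * buffer           # one optimizer step per 16 micro-batches
            buffer = 0.0
    return param
```

With 32 unit gradients this performs exactly two optimizer steps, moving the parameter by -2 * lr.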
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7648 | 2.0 | 500 | 0.6091 | 0.6632 |
| 0.4033 | 4.0 | 1000 | 0.7708 | 0.6632 |
| 0.1444 | 6.0 | 1500 | 1.0443 | 0.6563 |
| 0.0625 | 8.0 | 2000 | 1.3089 | 0.6628 |
| 0.0324 | 10.0 | 2500 | 1.3869 | 0.6673 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AliCampbellKhaya/a2c-PandaReachDense-v2
|
AliCampbellKhaya
| 2023-05-11T18:28:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T18:26:13Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.71 +/- 0.26
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (the checkpoint filename is an assumption based on the `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename assumed to follow the <repo-name>.zip convention
checkpoint = load_from_hub("AliCampbellKhaya/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Casa12/holi
|
Casa12
| 2023-05-11T18:11:44Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-11T18:11:44Z |
---
license: bigscience-openrail-m
---
|
katanaml-org/invoices-donut-model-v1
|
katanaml-org
| 2023-05-11T17:57:22Z | 315 | 38 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"en",
"dataset:katanaml-org/invoices-donut-data-v1",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-03-13T20:51:57Z |
---
license: mit
language:
- en
pipeline_tag: image-to-text
datasets:
- katanaml-org/invoices-donut-data-v1
---
## Sparrow - Data extraction from documents with ML
This model is a Donut base model fine-tuned on invoice data. It aims to verify how well Donut performs on enterprise documents.
Mean accuracy on test set: 0.96
Inference:

Training loss:

Sparrow on [GitHub](https://github.com/katanaml/sparrow)
Sample invoice [docs](https://github.com/katanaml/sparrow/tree/main/sparrow-ui/docs/images) to use for inference (documents up to number 500 were used for fine-tuning; use documents from 500 onward for inference)
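Donut-style models emit the extracted fields as an XML-like token sequence that is then converted to JSON (via `DonutProcessor.token2json` in Transformers). A simplified sketch of that conversion for flat, non-nested fields (the field names here are illustrative, not this model's actual schema):

```python
import re

def tokens_to_dict(sequence: str) -> dict:
    """Parse flat Donut-style output such as '<s_total>8.25</s_total>' into a dict."""
    return {m.group(1): m.group(2).strip()
            for m in re.finditer(r"<s_([a-zA-Z0-9_]+)>(.*?)</s_\1>", sequence)}

print(tokens_to_dict("<s_invoice_no>40378170</s_invoice_no><s_total>8.25</s_total>"))
```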
Our website [KatanaML](https://www.katanaml.io)
On [Twitter](https://twitter.com/katana_ml)
|
Casa122/Name
|
Casa122
| 2023-05-11T17:41:37Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-05-11T17:41:37Z |
---
license: bigscience-bloom-rail-1.0
---
|
elshehawy/dqn-SpaceInvadersNoFrameskip-v4
|
elshehawy
| 2023-05-11T17:39:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T09:58:25Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 605.00 +/- 252.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga elshehawy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga elshehawy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga elshehawy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('replay_buffer_kwargs', {'handle_timeout_termination': False}),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
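The `mean_reward` reported above (605.00 +/- 252.89) is the mean and standard deviation of per-episode returns collected during evaluation. A minimal sketch of that aggregation, with illustrative episode returns:

```python
import statistics

def summarize_returns(episode_returns):
    """Mean and population standard deviation, the format reported on the card."""
    return statistics.fmean(episode_returns), statistics.pstdev(episode_returns)

mean, std = summarize_returns([500.0, 700.0, 615.0])
print(f"{mean:.2f} +/- {std:.2f}")
```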
|
emozilla/scifi-fantasy-author-7b-8k_delta
|
emozilla
| 2023-05-11T17:29:52Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-11T02:46:07Z |
---
license: apache-2.0
inference: false
---
**NOTE: This "delta model" cannot be used directly.**
Users have to apply it on top of the original LLaMA weights.
See https://github.com/lm-sys/FastChat#vicuna-weights for instructions.
<br>
<br>
# scifi-fantasy-author Model Card
`scifi-fantasy-author` is a LLaMA-7B model fine-tuned to generate narrative fiction,
particularly in the Science Fiction and Fantasy genres.
The following hyperparameters were used:
|Batch Size|Epochs|Context length|Learning rate|Scheduler|Weight decay|Warmup ratio|
|---------:|-----:|-------------:|------------:|--------:|-----------:|-----------:|
| 128 | 3 | 8192 | 2e-5 | Cosine | 0. | 0.03 |
The model reached a training loss of 2.008 and took approximately 8 hours on 8x A100 80 GB GPUs.
The specific training script can be found [here](https://github.com/hooloovoo-ai/cyoa-backend/blob/master/backend/scripts/train.py).
|
Najia/t5-base-finetuned-urdu
|
Najia
| 2023-05-11T17:09:23Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-11T16:02:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Najia/t5-base-finetuned-urdu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Najia/t5-base-finetuned-urdu
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0701
- Validation Loss: 0.0538
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 3000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
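With `power=1.0` and `end_learning_rate=0.0`, the `PolynomialDecay` schedule above reduces to a linear decay from 5.6e-05 to 0 over 3,000 steps. A sketch of the formula:

```python
def polynomial_decay(step, initial_lr=5.6e-5, decay_steps=3000, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False (step is clipped at decay_steps)."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # initial learning rate
print(polynomial_decay(3000))  # fully decayed
```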
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1270 | 0.1669 | 0 |
| 0.0896 | 0.0654 | 1 |
| 0.0829 | 0.0598 | 2 |
| 0.0781 | 0.0511 | 3 |
| 0.0749 | 0.0534 | 4 |
| 0.0733 | 0.0533 | 5 |
| 0.0711 | 0.0515 | 6 |
| 0.0701 | 0.0538 | 7 |
### Framework versions
- Transformers 4.29.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bennet1996/donut-base-sroie7
|
Bennet1996
| 2023-05-11T16:41:25Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-05-11T14:38:51Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie7
This model is a fine-tuned version of [Bennet1996/donut-base-sroie6](https://huggingface.co/Bennet1996/donut-base-sroie6) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kujaomega/a2c-PandaReachDense-v2
|
kujaomega
| 2023-05-11T16:37:37Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T16:35:15Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.25 +/- 0.42
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (the checkpoint filename is an assumption based on the `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename assumed to follow the <repo-name>.zip convention
checkpoint = load_from_hub("kujaomega/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
kornwtp/ConGen-WangchanBERT-Small
|
kornwtp
| 2023-05-11T16:31:26Z | 860 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-11T16:15:07Z |
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kornwtp/ConGen-WangchanBERT-Small
This is a [ConGen](https://github.com/KornWtp/ConGen) model: it maps sentences to a 128-dimensional dense vector space and can be used for tasks like semantic search.
## Usage
Using this model becomes easy when you have [ConGen](https://github.com/KornWtp/ConGen) installed:
```
pip install -U git+https://github.com/KornWtp/ConGen.git
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('kornwtp/ConGen-WangchanBERT-Small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Thai Sentence Embeddings Benchmark*: [Semantic Textual Similarity](https://github.com/KornWtp/ConGen#thai-semantic-textual-similarity-benchmark)
## Citing & Authors
```bibtex
@inproceedings{limkonchotiwat-etal-2022-congen,
title = "{ConGen}: Unsupervised Control and Generalization Distillation For Sentence Representation",
author = "Limkonchotiwat, Peerat and
Ponwitayarat, Wuttikorn and
Lowphansirikul, Lalita and
Udomcharoenchaikit, Can and
Chuangsuwanich, Ekapol and
Nutanong, Sarana",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
year = "2022",
publisher = "Association for Computational Linguistics",
}
```
|
parthvi/setfit-hs-model-2ep
|
parthvi
| 2023-05-11T16:29:18Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-11T16:29:10Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# parthvi/setfit-hs-model-2ep
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("parthvi/setfit-hs-model-2ep")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
zawyar/t5-base-finetuned-urdu
|
zawyar
| 2023-05-11T16:25:47Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-11T15:43:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: zawyar/t5-base-finetuned-urdu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# zawyar/t5-base-finetuned-urdu
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0778
- Validation Loss: 0.0562
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 3000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1262 | 0.0646 | 0 |
| 0.0897 | 0.1241 | 1 |
| 0.0828 | 0.0534 | 2 |
| 0.0778 | 0.0562 | 3 |
### Framework versions
- Transformers 4.29.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Mauregato/qqq-finetuned-on-calls
|
Mauregato
| 2023-05-11T15:59:13Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-11T15:53:12Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: qqq-finetuned-on-calls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqq-finetuned-on-calls
This model is a fine-tuned version of [bragovo/qqq](https://huggingface.co/bragovo/qqq) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0021
- Rouge-1: 1.0
- Rouge-2: 1.0
- Rouge-l: 1.0
- Gen Len: 11.0
- Avg Rouge F: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Avg Rouge F |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-----------:|
| 2.1855 | 3.12 | 25 | 1.4282 | 0.0 | 0.0 | 0.0 | 15.0 | 0.0 |
| 1.5665 | 6.25 | 50 | 0.6420 | 0.1818 | 0.0 | 0.1818 | 12.0 | 0.1212 |
| 1.1046 | 9.38 | 75 | 0.2184 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.8218 | 12.5 | 100 | 0.1098 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.606 | 15.62 | 125 | 0.0749 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.5488 | 18.75 | 150 | 0.0577 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.4161 | 21.88 | 175 | 0.0684 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.3196 | 25.0 | 200 | 0.0570 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.2929 | 28.12 | 225 | 0.0416 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.2519 | 31.25 | 250 | 0.0247 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.178 | 34.38 | 275 | 0.0118 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.1603 | 37.5 | 300 | 0.0064 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.1684 | 40.62 | 325 | 0.0051 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.1326 | 43.75 | 350 | 0.0051 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.1349 | 46.88 | 375 | 0.0064 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.1105 | 50.0 | 400 | 0.0061 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.1026 | 53.12 | 425 | 0.0049 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.0936 | 56.25 | 450 | 0.0030 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.0704 | 59.38 | 475 | 0.0025 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.0699 | 62.5 | 500 | 0.0021 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.0863 | 65.62 | 525 | 0.0020 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.0595 | 68.75 | 550 | 0.0024 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.0594 | 71.88 | 575 | 0.0028 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.0683 | 75.0 | 600 | 0.0026 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
| 0.074 | 78.12 | 625 | 0.0025 | 1.0 | 1.0 | 1.0 | 11.0 | 1.0 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kujaomega/a2c-AntBulletEnv-v0
|
kujaomega
| 2023-05-11T15:56:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T15:55:58Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1483.07 +/- 260.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (the checkpoint filename is an assumption based on the `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename assumed to follow the <repo-name>.zip convention
checkpoint = load_from_hub("kujaomega/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Cynthiaiii4/Text_classification_model_bbu_RF
|
Cynthiaiii4
| 2023-05-11T15:49:57Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T02:15:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbu_RF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbu_RF
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4642
- Accuracy: 0.7775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.4967 | 0.7575 |
| No log | 2.0 | 200 | 0.4642 | 0.7775 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ulelmelingkel/pipit
|
ulelmelingkel
| 2023-05-11T15:47:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T15:43:03Z |
---
license: creativeml-openrail-m
---
|
Neronuser/dqn-SpaceInvadersNoFrameskip-no-r
|
Neronuser
| 2023-05-11T15:46:38Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T15:45:57Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 821.00 +/- 300.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Neronuser -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Neronuser -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Neronuser
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
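With `exploration_fraction=0.1` and `n_timesteps=1e6`, the epsilon-greedy exploration rate decays linearly from 1.0 to `exploration_final_eps=0.01` over the first 100,000 steps and then stays at the floor. A sketch, assuming SB3's linear exploration schedule:

```python
def exploration_eps(step, n_timesteps=1_000_000, fraction=0.1,
                    initial_eps=1.0, final_eps=0.01):
    """Linearly anneal epsilon over the first `fraction` of training."""
    progress = min(step / (fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(exploration_eps(0))       # fully random at the start
print(exploration_eps(50_000))  # halfway through the annealing window
```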
|
muhammadravi251001/fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
|
muhammadravi251001
| 2023-05-11T15:40:37Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-03T20:00:25Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2784
- Exact Match: 53.4392
- F1: 68.7244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
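The `total_train_batch_size` of 128 above is simply the product of the per-device batch size and the gradient-accumulation steps (times the number of devices, one here). A one-line sanity check:

```python
def effective_batch_size(per_device: int, accumulation_steps: int, num_devices: int = 1) -> int:
    """Examples contributing to a single optimizer update."""
    return per_device * accumulation_steps * num_devices

print(effective_batch_size(4, 32))  # → 128
```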
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.1764 | 0.5 | 19 | 3.7674 | 10.4056 | 23.6332 |
| 6.1764 | 1.0 | 38 | 2.7985 | 19.5767 | 32.6228 |
| 3.8085 | 1.49 | 57 | 2.4169 | 22.0459 | 35.4084 |
| 3.8085 | 1.99 | 76 | 2.2811 | 25.9259 | 38.3963 |
| 3.8085 | 2.49 | 95 | 2.1607 | 28.0423 | 40.3901 |
| 2.3932 | 2.99 | 114 | 2.0488 | 31.0406 | 43.7059 |
| 2.3932 | 3.49 | 133 | 1.9787 | 34.3915 | 46.3655 |
| 2.0772 | 3.98 | 152 | 1.8661 | 37.2134 | 49.1483 |
| 2.0772 | 4.48 | 171 | 1.7893 | 40.2116 | 52.4989 |
| 2.0772 | 4.98 | 190 | 1.7014 | 41.9753 | 54.9197 |
| 1.7645 | 5.48 | 209 | 1.5940 | 44.2681 | 58.2134 |
| 1.7645 | 5.98 | 228 | 1.4972 | 46.2081 | 60.4997 |
| 1.7645 | 6.47 | 247 | 1.4214 | 48.8536 | 63.4371 |
| 1.5035 | 6.97 | 266 | 1.3676 | 50.6173 | 65.4663 |
| 1.5035 | 7.47 | 285 | 1.3357 | 52.2046 | 67.1759 |
| 1.3206 | 7.97 | 304 | 1.3149 | 53.0864 | 68.0698 |
| 1.3206 | 8.47 | 323 | 1.2988 | 53.4392 | 68.3971 |
| 1.3206 | 8.96 | 342 | 1.2894 | 53.6155 | 68.8897 |
| 1.2472 | 9.46 | 361 | 1.2820 | 53.4392 | 68.5835 |
| 1.2472 | 9.96 | 380 | 1.2784 | 53.4392 | 68.7244 |
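The Exact Match and F1 scores follow the usual SQuAD-style extractive-QA definitions: EM checks for an exact (normalized) string match, while F1 is the harmonic mean of token-level precision and recall. A simplified sketch (real evaluation scripts also strip punctuation and articles):

```python
def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("jakarta", "Jakarta"))            # normalized exact match
print(token_f1("di kota jakarta", "kota jakarta"))  # partial token overlap
```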
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
SakuraFoxKira/BY_RF-7
|
SakuraFoxKira
| 2023-05-11T15:32:31Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T15:28:50Z |
---
license: creativeml-openrail-m
---
|
irow/a2c-AntBulletEnv-v0
|
irow
| 2023-05-11T15:20:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T15:20:13Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1867.02 +/- 96.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (the checkpoint filename is an assumption based on the `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename assumed to follow the <repo-name>.zip convention
checkpoint = load_from_hub("irow/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
parthvi/setfit-hs-model
|
parthvi
| 2023-05-11T15:07:13Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-11T15:07:05Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# parthvi/setfit-hs-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("parthvi/setfit-hs-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
zari12/helli
|
zari12
| 2023-05-11T14:56:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T14:55:30Z |
---
license: creativeml-openrail-m
---
|
muhammadravi251001/fine-tuned-DatasetQAS-IDK-MRC-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
|
muhammadravi251001
| 2023-05-11T14:49:30Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-05T21:35:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-IDK-MRC-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-IDK-MRC-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0883
- Exact Match: 65.4450
- F1: 70.8022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.2828 | 0.49 | 36 | 2.6576 | 49.7382 | 49.7756 |
| 3.794 | 0.98 | 72 | 1.9936 | 49.8691 | 49.8691 |
| 2.2086 | 1.47 | 108 | 1.8469 | 49.2147 | 49.5992 |
| 2.2086 | 1.96 | 144 | 1.7445 | 50.5236 | 51.9107 |
| 2.0123 | 2.46 | 180 | 1.6178 | 49.8691 | 54.4031 |
| 1.7802 | 2.95 | 216 | 1.4800 | 54.8429 | 58.8765 |
| 1.5945 | 3.44 | 252 | 1.3337 | 57.5916 | 62.8748 |
| 1.5945 | 3.93 | 288 | 1.3153 | 58.2461 | 63.4667 |
| 1.4083 | 4.42 | 324 | 1.2184 | 59.8168 | 65.4478 |
| 1.2513 | 4.91 | 360 | 1.2348 | 58.3770 | 64.1649 |
| 1.2513 | 5.4 | 396 | 1.1415 | 62.6963 | 68.0081 |
| 1.161 | 5.89 | 432 | 1.1463 | 62.6963 | 67.6633 |
| 1.0755 | 6.38 | 468 | 1.1126 | 63.4817 | 68.7554 |
| 1.0099 | 6.87 | 504 | 1.0823 | 63.4817 | 68.9182 |
| 1.0099 | 7.37 | 540 | 1.0547 | 66.2304 | 71.2423 |
| 0.9815 | 7.86 | 576 | 1.0835 | 63.4817 | 69.1031 |
| 0.9464 | 8.35 | 612 | 1.0644 | 66.3613 | 71.4374 |
| 0.9464 | 8.84 | 648 | 1.0642 | 65.9686 | 71.2813 |
| 0.9325 | 9.33 | 684 | 1.0786 | 65.4450 | 70.8541 |
| 0.913 | 9.82 | 720 | 1.0883 | 65.4450 | 70.8022 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
theSLWayne/Muwa-1.3b
|
theSLWayne
| 2023-05-11T14:23:58Z | 0 | 0 | null |
[
"text-generation-inference",
"en",
"dataset:databricks/databricks-dolly-15k",
"arxiv:2106.09685",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-05-04T11:45:14Z |
---
license: cc-by-nc-4.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
tags:
- text-generation-inference
---
# Muwa-OPT - A budget-friendly OPT-based LLM
[Muwa Repository on GitHub](https://github.com/theSLWayne/Muwa-OPT/)

Muwa is a fine-tuned LoRA model based on Facebook's OPT model architecture. Muwa was fine-tuned using the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), which is a dataset of instruction-following records that belong to multiple categories like brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization. **The specialty of Muwa is that only free resources were used to fine-tune the model**: no fancy arrays of GPUs or paid GPU instances, only the free tier of Google Colaboratory.
Muwa is currently trained using the [OPT 1.3b model](https://huggingface.co/facebook/opt-1.3b), which is available in HuggingFace.
This work is heavily inspired by [Yudhanjaya's Eluwa model](https://github.com/yudhanjaya/Eluwa). Most of the model fine-tuning and benchmarking code is taken from their repository; I made some adjustments and changed some parameters so that the fine-tuning process could run on the free resources available to me at the time.
## Inference
Make sure you install the following Python packages in the environment where the model is intended to be run.
```shell
pip install torch peft datasets evaluate transformers accelerate bitsandbytes
```
First, the OPT 1.3b base model should be loaded, and then Muwa on top of it, from their respective HuggingFace repositories. After the models are loaded, they can be used for inference.
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
# Define model names to be loaded
peft_model_id = 'theSLWayne/Muwa-1.3b'
base_model = 'facebook/opt-1.3b'
# Load base model
model = AutoModelForCausalLM.from_pretrained(
base_model,
device_map='auto',
torch_dtype=torch.float16,
)
# Load Muwa
model = PeftModel.from_pretrained(
model,
peft_model_id,
device_map='auto',
torch_dtype=torch.float16,
)
# Initiate tokenizer of the base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
# Create batches of inputs
batch = tokenizer("What is a deep learning model?", return_tensors='pt')
# Take predictions
with torch.cuda.amp.autocast():
output_tokens = model.generate(**batch, max_new_tokens=50)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
If you intend to use CPU (which is not recommended), you can load the models as follows:
```python
model = AutoModelForCausalLM.from_pretrained(
base_model, device_map='auto', low_cpu_mem_usage=True
)
model = PeftModel.from_pretrained(
model,
peft_model_id,
device_map='auto',
)
```
## Training Muwa
This model was fine-tuned for 2 Epochs using the aforementioned Databricks Dolly 15K dataset. This model and its base model (OPT 1.3b) can be loaded in 8-bit. The notebook that was used for training this model can be found on the [GitHub repo](https://github.com/theSLWayne/Muwa-OPT/), including my notes on each code block.
The model was trained using only the T4 GPU provided by Google Colab. **In order to fit the model and the dataset into memory, inputs were limited to 1024 tokens per query**; **with the default limit, the available GPU RAM was not enough to fine-tune the model**.
With the limit in input tokens, the model training took ~12 GB of GPU RAM.
### PEFT and LoRA
PEFT (Parameter-Efficient Fine-Tuning) is a set of approaches meant to reduce the cost of fine-tuning, storing, and deploying large models. According to [this HuggingFace article on PEFT](https://huggingface.co/blog/peft),
*`PEFT approaches only fine-tune a small number of (extra) model parameters while freezing most parameters of the pretrained LLMs, thereby greatly decreasing the computational and storage costs. This also overcomes the issues of catastrophic forgetting, a behaviour observed during the full finetuning of LLMs. PEFT approaches have also shown to be better than fine-tuning in the low-data regimes and generalize better to out-of-domain scenarios. It can be applied to various modalities, e.g., image classification and stable diffusion dreambooth.`*
HuggingFace has launched a Python package with the same name; according to its documentation, it implements a number of PEFT methods:
1. LoRA
2. Prefix Tuning
3. P-Tuning
4. Prompt Tuning
5. AdaLoRA
This package is used in fine-tuning and in the inference of Muwa. More details about this package can be discovered [here](https://github.com/huggingface/peft).
LoRA (Low-Rank Adaptation) is a method proposed for adapting large pre-trained language models to specific tasks or domains. It involves freezing the pre-trained model weights and adding trainable rank decomposition matrices to each layer of the Transformer architecture, which significantly reduces the number of trainable parameters for downstream tasks. This approach allows for efficient adaptation of language models with fewer trainable parameters and reduced GPU memory requirements. More information on LoRA can be found in the paper that introduced the method, which can be accessed [here](https://arxiv.org/abs/2106.09685). Also, [this video](https://www.youtube.com/watch?v=_K3HgjnRHCY&lc=Ugyqpr8yVUW2DHlvsoZ4AaABAg) explains the paper in simple terms, which I found to be very useful.
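As a rough back-of-the-envelope illustration (not the actual fine-tuning code), LoRA's parameter savings can be seen by comparing a dense d×d weight update with its rank-r factorization into B (d×r) and A (r×d); the hidden size and rank below are illustrative values:

```python
# Toy illustration of LoRA's parameter savings: instead of training a full
# d x d weight update, train two low-rank factors B (d x r) and A (r x d).
d, r = 768, 8  # hidden size and LoRA rank (r = 8 is a common choice)

full_update_params = d * d       # parameters in a dense update matrix
lora_params = d * r + r * d      # parameters in B and A combined

print(full_update_params)                 # 589824
print(lora_params)                        # 12288
print(lora_params / full_update_params)   # ~2% of the dense update
```

This is why LoRA fine-tuning of a billion-parameter model can fit in a free Colab GPU while full fine-tuning cannot.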
## Testing and Evaluating
Muwa was tested and evaluated using the SQuAD mini, wikitext, and piqa datasets. Both Muwa and its base model, OPT 1.3b, were evaluated separately on all of the mentioned datasets, and the results can be summarized as follows:
| Dataset | OPT 1.3b | Muwa |
|---------|----------|------|
| SQuAD Mini (*avg. f1 score*) | 24.587 | **26.234** |
| wikitext (*perplexity*) | **13.91406** | 13.96875 |
| piqa (*accuracy*) | 0.495 | **0.532** |
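For context on how the wikitext row is read: perplexity is the exponential of the average per-token negative log-likelihood, so lower values mean a better language model. A minimal sketch with made-up per-token losses:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Two hypothetical models scored on the same tokens: the one assigning
# higher probability (lower NLL) to the text gets the lower perplexity.
print(perplexity([2.6, 2.7, 2.6]))  # ~13.92 (better fit)
print(perplexity([2.7, 2.8, 2.7]))  # ~15.38
```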
As shown, Muwa outperforms its base model on SQuAD Mini and piqa, and stays close on wikitext perplexity (where lower is better), despite being fine-tuned on a rather small dataset (compared to others like [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) available for these tasks).
This shows that LLMs with billions of parameters can be fine-tuned using freely available resources, and that doing so can actually improve the model's performance.
Code used for evaluating Muwa can be found in the notebook which is included in the [GitHub repo](https://github.com/theSLWayne/Muwa-OPT/).
## The Story Behind Muwa
As mentioned above, Muwa was heavily inspired by the Eluwa model developed by Yudhanjaya et al. "Eluwa" means goat in Sinhalese. Continuing the trend of naming LLMs after even-toed ungulates, this model is named "Muwa".
Deer aren't as fearsome as goats, or even llamas and alpacas, but they are still an impressive species. They are graceful, agile, and known for their antlers, which they shed and regrow every year. In some cultures, deer are considered a symbol of gentleness and kindness. All the more reason to name this model after them.
About the graphic at the beginning of this document: it is the work of someone (me) with zero knowledge of or experience in design, and it shows. The initial image was taken from [freepngimg.com](https://www.freepngimg.com/png/22758-deer-head-free-download) and is protected under the [Creative Commons (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) license. That image was colorized using the [Colorizer Models HuggingFace space](https://huggingface.co/spaces/trysem/Colorizer_Models), and the text was added after loading the colorized image into [Canva](https://canva.com), which produced the final output.
## License
The base model used for this work, Facebook's OPT has its own license, which can be found [here](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md).
Databricks Dolly 15k model is protected under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/), allowing it to be modified, redistributed, and used for any purpose, even commercially.
Although the dataset may be modified and redistributed, the OPT license does not allow the model to be used for commercial or other non-research purposes, which restricts Muwa to research use only, under CC BY-NC 4.0.
|
akmalartsai/TsumuriDGP
|
akmalartsai
| 2023-05-11T14:16:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T14:15:24Z |
---
license: creativeml-openrail-m
---
|
sujithkumar6502/Taxi-v3-qlearning
|
sujithkumar6502
| 2023-05-11T14:11:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T14:11:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-qlearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper function provided in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="sujithkumar6502/Taxi-v3-qlearning", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
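Once loaded, the Q-table is typically used greedily: in each state, take the action with the highest Q-value. A minimal sketch with a made-up Q-table:

```python
def greedy_action(q_row):
    """Return the index of the best action for one state's Q-values."""
    best, best_a = float("-inf"), 0
    for a, q in enumerate(q_row):
        if q > best:
            best, best_a = q, a
    return best_a

# Illustrative Q-table: one row per state, one column per action
q_table = [
    [0.1, 0.5, -0.2],  # state 0: action 1 has the highest value
    [0.0, -1.0, 0.3],  # state 1: action 2 has the highest value
]
print([greedy_action(row) for row in q_table])  # [1, 2]
```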
|
law-ai/InCaseLawBERT
|
law-ai
| 2023-05-11T14:07:12Z | 402 | 18 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"legal",
"fill-mask",
"en",
"arxiv:2209.06049",
"arxiv:2112.14731",
"arxiv:1911.05405",
"arxiv:2105.13562",
"license:mit",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-11T14:45:55Z |
---
language: en
pipeline_tag: fill-mask
tags:
- legal
license: mit
---
### InCaseLawBERT
Model and tokenizer files for the InCaseLawBERT model from the paper [Pre-training Transformers on Indian Legal Text](https://arxiv.org/abs/2209.06049).
### Training Data
For building the pre-training corpus of Indian legal text, we collected a large corpus of case documents from the Indian Supreme Court and many High Courts of India.
The court cases in our dataset range from 1950 to 2019, and belong to all legal domains, such as Civil, Criminal, Constitutional, and so on.
In total, our dataset contains around 5.4 million Indian legal documents (all in the English language).
The raw text corpus size is around 27 GB.
### Training Setup
This model is initialized with the [Legal-BERT model](https://huggingface.co/zlucia/legalbert) from the paper [When does pretraining help?: assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings](https://dl.acm.org/doi/abs/10.1145/3462757.3466088). In our work, we refer to this model as CaseLawBERT, and our re-trained model as InCaseLawBERT.
We further train this model on our data for 300K steps on the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks.
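As a rough sketch of what the MLM objective looks like (illustrative only, not the actual pre-training code): a fraction of input tokens is replaced with a mask token, and the model is trained to recover the originals at the masked positions:

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", prob=0.15, seed=1):
    """BERT-style masked language modeling: hide a fraction of tokens;
    the model is trained to predict the originals at masked positions."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < prob:
            masked.append(mask_token)   # the model must recover `tok` here
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)         # no loss at unmasked positions
    return masked, labels

tokens = "the appellant filed a petition before the high court of delhi".split()
masked, labels = mask_tokens(tokens)
print(masked)
```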
### Model Overview
This model uses the same tokenizer as [CaseLawBERT](https://huggingface.co/zlucia/legalbert).
This model has the same configuration as the [bert-base-uncased model](https://huggingface.co/bert-base-uncased):
12 hidden layers, 768 hidden dimensionality, 12 attention heads, ~110M parameters.
### Usage
Using the model to get embeddings/representations for a piece of text
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("law-ai/InCaseLawBERT")
text = "Replace this string with yours"
encoded_input = tokenizer(text, return_tensors="pt")
model = AutoModel.from_pretrained("law-ai/InCaseLawBERT")
output = model(**encoded_input)
last_hidden_state = output.last_hidden_state
```
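The `last_hidden_state` above contains one vector per token; to get a single vector for the whole text, a common (though not the only) choice is attention-mask mean pooling. The idea, sketched in plain Python for clarity (in practice you would do this directly on the returned tensors):

```python
def mean_pool(hidden_states, attention_mask):
    """Average token vectors, ignoring positions where the mask is 0."""
    dim = len(hidden_states[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(hidden_states, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                sums[i] += v
    return [s / count for s in sums]

# Three tokens, last one is padding (mask 0); 2-dim vectors for brevity
hidden = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pool(hidden, mask))  # [2.0, 3.0]
```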
### Fine-tuning Results
We have fine-tuned all pre-trained models on 3 legal tasks with Indian datasets:
* Legal Statute Identification ([ILSI Dataset](https://arxiv.org/abs/2112.14731))[Multi-label Text Classification]: Identifying relevant statutes (law articles) based on the facts of a court case
* Semantic Segmentation ([ISS Dataset](https://arxiv.org/abs/1911.05405))[Sentence Tagging]: Segmenting the document into 7 functional parts (semantic segments) such as Facts, Arguments, etc.
* Court Judgment Prediction ([ILDC Dataset](https://arxiv.org/abs/2105.13562))[Binary Text Classification]: Predicting whether the claims/petitions of a court case will be accepted/rejected
InCaseLawBERT performs close to CaseLawBERT across the three tasks, but not as good as [InLegalBERT](https://huggingface.co/law-ai/InLegalBERT). For details, see our [paper](https://arxiv.org/abs/2209.06049).
### Citation
```
@inproceedings{paul-2022-pretraining,
url = {https://arxiv.org/abs/2209.06049},
author = {Paul, Shounak and Mandal, Arpan and Goyal, Pawan and Ghosh, Saptarshi},
title = {Pre-trained Language Models for the Legal Domain: A Case Study on Indian Law},
  booktitle = {Proceedings of 19th International Conference on Artificial Intelligence and Law - ICAIL 2023},
year = {2023},
}
```
### About Us
We are a group of researchers from the Department of Computer Science and Technology, Indian Institute of Technology, Kharagpur.
Our research interests are primarily ML and NLP applications for the legal domain, with a special focus on the challenges and opportunities of the Indian legal scenario.
We have, and are currently working on several legal tasks such as:
* named entity recognition, summarization of legal documents
* semantic segmentation of legal documents
* legal statute identification from facts, court judgment prediction
* legal document matching
You can find our publicly available codes and datasets [here](https://github.com/Law-AI).
|
moabdelg-org/ppo-Huggy
|
moabdelg-org
| 2023-05-11T14:05:20Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-11T14:05:13Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: moabdelg-org/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
iggggor/ppo-Huggy
|
iggggor
| 2023-05-11T14:04:58Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-11T14:04:51Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: iggggor/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
leadawon/ossp-v0_3
|
leadawon
| 2023-05-11T13:55:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-15T07:08:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: ossp-v0_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ossp-v0_3
This model is a fine-tuned version of [leadawon/ossp-v0_2](https://huggingface.co/leadawon/ossp-v0_2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.3999 | 0.2 | 10000 | 0.4079 |
| 0.4441 | 0.39 | 20000 | 0.4555 |
| 0.4361 | 0.59 | 30000 | 0.4378 |
| 0.4302 | 0.79 | 40000 | 0.4255 |
| 0.4392 | 0.98 | 50000 | 0.4076 |
| 0.3714 | 1.18 | 60000 | 0.4006 |
| 0.3694 | 1.38 | 70000 | 0.3908 |
| 0.3591 | 1.57 | 80000 | 0.3810 |
| 0.3594 | 1.77 | 90000 | 0.3762 |
| 0.3567 | 1.97 | 100000 | 0.3667 |
| 0.3041 | 2.16 | 110000 | 0.3663 |
| 0.299 | 2.36 | 120000 | 0.3603 |
| 0.2972 | 2.56 | 130000 | 0.3569 |
| 0.2892 | 2.75 | 140000 | 0.3519 |
| 0.2844 | 2.95 | 150000 | 0.3463 |
| 0.2372 | 3.15 | 160000 | 0.3522 |
| 0.2367 | 3.34 | 170000 | 0.3508 |
| 0.2295 | 3.54 | 180000 | 0.3489 |
| 0.2281 | 3.74 | 190000 | 0.3468 |
| 0.2233 | 3.93 | 200000 | 0.3451 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
MrPark97/distilbert-base-uncased-finetuned-emotion
|
MrPark97
| 2023-05-11T13:52:23Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T13:39:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9219181118935907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.922
- F1: 0.9219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8438 | 1.0 | 250 | 0.3229 | 0.901 | 0.8975 |
| 0.2511 | 2.0 | 500 | 0.2156 | 0.922 | 0.9219 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GhylB/Sentiment_Analysis_RoBERTa
|
GhylB
| 2023-05-11T13:31:14Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T12:05:17Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Sentiment_Analysis_RoBERTa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment_Analysis_RoBERTa
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5934
- Rmse: 0.6311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7173 | 2.0 | 500 | 0.5934 | 0.6311 |
| 0.4139 | 4.0 | 1000 | 0.6405 | 0.6015 |
| 0.1956 | 6.0 | 1500 | 0.8526 | 0.6122 |
| 0.0997 | 8.0 | 2000 | 1.1684 | 0.6089 |
| 0.0569 | 10.0 | 2500 | 1.2575 | 0.5986 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
farmer00317558/quantized_whisper_model
|
farmer00317558
| 2023-05-11T13:18:56Z | 0 | 1 | null |
[
"whisper.cpp",
"ggml",
"quantized_whisper_model",
"license:mit",
"region:us"
] | null | 2023-05-11T13:07:13Z |
---
license: mit
tags:
- whisper.cpp
- ggml
- quantized_whisper_model
---
Quantized Whisper model for https://github.com/ggerganov/whisper.cpp
|
PaulineSanchez/Modele_traduction_HF
|
PaulineSanchez
| 2023-05-11T13:10:58Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"food",
"translation",
"en",
"fr",
"dataset:PaulineSanchez/Trad_food",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-05-05T07:41:40Z |
---
language:
- en
- fr
datasets:
- PaulineSanchez/Trad_food
metrics:
- bleu
tags:
- food
pipeline_tag: translation
---
# train_hf
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the PaulineSanchez/Trad_food dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5736
- Bleu: 77.4387
- Gen Len: 10.8386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Mizuiro-sakura/luke-japanese-large-finetuned-ner
|
Mizuiro-sakura
| 2023-05-11T13:02:50Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"luke",
"token-classification",
"ner",
"固有表現抽出",
"named entity recognition",
"named-entity-recognition",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-11T12:00:20Z |
---
license: mit
language: ja
tags:
- luke
- pytorch
- transformers
- ner
- 固有表現抽出
- named entity recognition
- named-entity-recognition
---
# This model is luke-japanese-large fine-tuned for Named Entity Recognition (NER)
This model was built by fine-tuning luke-japanese-large on a Japanese named entity recognition dataset constructed from Wikipedia (by Stockmark Inc., https://github.com/stockmarkteam/ner-wikipedia-dataset ).
You can use this model for NER tasks.
# Model accuracy
Overall: 0.8453191098032002
||precision|recall|f1-score|support|
|-------------|-----|-----|-----|-----|
|その他の組織名 (other organization)|0.78|0.79|0.79|238|
|イベント名 (event)|0.83|0.88|0.85|215|
|人名 (person)|0.88|0.89|0.89|546|
|地名 (location)|0.83|0.85|0.84|440|
|政治的組織名 (political organization)|0.80|0.84|0.82|263|
|施設名 (facility)|0.79|0.84|0.81|241|
|法人名 (corporation)|0.88|0.89|0.89|487|
|製品名 (product)|0.79|0.80|0.79|252|
|micro avg|0.83|0.86|0.85|2682|
|macro avg|0.82|0.85|0.83|2682|
|weighted avg|0.83|0.86|0.85|2682|
# How to use
Install sentencepiece and transformers (`pip install sentencepiece transformers`), then execute the following code to run NER:
```python
from transformers import MLukeTokenizer,pipeline, LukeForTokenClassification
tokenizer = MLukeTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-large-finetuned-ner')
model=LukeForTokenClassification.from_pretrained('Mizuiro-sakura/luke-japanese-large-finetuned-ner') # load the fine-tuned model
text=('昨日は東京で買い物をした')
ner=pipeline('ner', model=model, tokenizer=tokenizer)
result=ner(text)
print(result)
```
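The pipeline returns one prediction per subword token; contiguous tokens belonging to the same entity are usually merged into spans (the transformers pipeline can also do this itself via its `aggregation_strategy` argument). A rough sketch of that grouping step, assuming BIO-style labels for illustration:

```python
def group_entities(predictions):
    """Merge consecutive token predictions that share an entity label."""
    groups = []
    for p in predictions:
        label = p["entity"].split("-")[-1]  # strip a B-/I- prefix if present
        if groups and groups[-1]["label"] == label and p["entity"].startswith("I"):
            groups[-1]["word"] += p["word"]  # continue the current entity span
        else:
            groups.append({"label": label, "word": p["word"]})
    return groups

# Hypothetical token-level output for "東京" split into two subwords
preds = [
    {"entity": "B-LOC", "word": "東"},
    {"entity": "I-LOC", "word": "京"},
]
print(group_entities(preds))  # [{'label': 'LOC', 'word': '東京'}]
```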
# What is LUKE? [1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model of words and entities; it treats words and entities as independent tokens and outputs contextualized representations of them.
# Acknowledgments
I would like to thank Mr. Yamada (@ikuyamada) and Studio Ousia (@StudioOusia), the developers of LUKE.
# Citation
[1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
|
pradeep4321/valve_model
|
pradeep4321
| 2023-05-11T13:02:15Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-11T12:35:28Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: valve_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# valve_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4860
- Validation Loss: 6.0810
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 200, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1291 | 5.9072 | 0 |
| 3.1205 | 5.9071 | 1 |
| 3.0615 | 5.9070 | 2 |
| 3.1662 | 5.9069 | 3 |
| 3.1011 | 5.9068 | 4 |
| 3.1374 | 5.9066 | 5 |
| 3.1472 | 5.9065 | 6 |
| 3.0926 | 5.9066 | 7 |
| 3.1436 | 5.9065 | 8 |
| 3.1321 | 5.9065 | 9 |
| 3.1027 | 5.9065 | 10 |
| 2.9848 | 5.9068 | 11 |
| 2.9544 | 5.9069 | 12 |
| 3.0212 | 5.9066 | 13 |
| 3.0448 | 5.9066 | 14 |
| 3.0455 | 5.9063 | 15 |
| 3.0294 | 5.9063 | 16 |
| 2.9529 | 5.9058 | 17 |
| 2.8377 | 5.9054 | 18 |
| 2.8682 | 5.9054 | 19 |
| 2.9745 | 5.9050 | 20 |
| 2.9680 | 5.9049 | 21 |
| 2.9270 | 5.9046 | 22 |
| 2.8955 | 5.9039 | 23 |
| 2.9627 | 5.9031 | 24 |
| 2.8304 | 5.9020 | 25 |
| 2.8542 | 5.9009 | 26 |
| 2.8008 | 5.8999 | 27 |
| 2.8067 | 5.8992 | 28 |
| 2.7471 | 5.8987 | 29 |
| 2.7494 | 5.8983 | 30 |
| 2.7467 | 5.8990 | 31 |
| 2.6482 | 5.9001 | 32 |
| 2.7226 | 5.9006 | 33 |
| 2.6202 | 5.9003 | 34 |
| 2.6576 | 5.9005 | 35 |
| 2.6144 | 5.9010 | 36 |
| 2.6040 | 5.9015 | 37 |
| 2.4523 | 5.9022 | 38 |
| 2.4589 | 5.9023 | 39 |
| 2.4796 | 5.9028 | 40 |
| 2.4962 | 5.9027 | 41 |
| 2.4251 | 5.9029 | 42 |
| 2.3685 | 5.9031 | 43 |
| 2.3015 | 5.9034 | 44 |
| 2.3080 | 5.9035 | 45 |
| 2.2066 | 5.9039 | 46 |
| 2.1621 | 5.9061 | 47 |
| 2.1354 | 5.9088 | 48 |
| 2.1527 | 5.9112 | 49 |
| 2.1650 | 5.9115 | 50 |
| 2.1298 | 5.9117 | 51 |
| 2.0993 | 5.9106 | 52 |
| 2.0044 | 5.9099 | 53 |
| 1.9764 | 5.9102 | 54 |
| 1.9662 | 5.9116 | 55 |
| 1.9702 | 5.9145 | 56 |
| 1.9012 | 5.9152 | 57 |
| 1.8061 | 5.9175 | 58 |
| 1.7831 | 5.9211 | 59 |
| 1.8015 | 5.9253 | 60 |
| 1.7642 | 5.9298 | 61 |
| 1.7484 | 5.9328 | 62 |
| 1.5452 | 5.9342 | 63 |
| 1.5996 | 5.9369 | 64 |
| 1.4831 | 5.9396 | 65 |
| 1.4367 | 5.9421 | 66 |
| 1.4981 | 5.9435 | 67 |
| 1.4513 | 5.9475 | 68 |
| 1.3897 | 5.9532 | 69 |
| 1.3108 | 5.9603 | 70 |
| 1.3337 | 5.9664 | 71 |
| 1.2564 | 5.9728 | 72 |
| 1.2671 | 5.9770 | 73 |
| 1.1286 | 5.9814 | 74 |
| 1.1349 | 5.9843 | 75 |
| 1.1645 | 5.9842 | 76 |
| 1.1462 | 5.9806 | 77 |
| 1.1028 | 5.9791 | 78 |
| 0.9843 | 5.9770 | 79 |
| 0.9734 | 5.9768 | 80 |
| 0.9831 | 5.9795 | 81 |
| 1.0021 | 5.9823 | 82 |
| 0.8903 | 5.9826 | 83 |
| 0.8244 | 5.9837 | 84 |
| 0.8597 | 5.9863 | 85 |
| 0.8703 | 5.9907 | 86 |
| 0.7864 | 5.9996 | 87 |
| 0.7394 | 6.0086 | 88 |
| 0.6764 | 6.0188 | 89 |
| 0.7007 | 6.0278 | 90 |
| 0.6247 | 6.0355 | 91 |
| 0.6640 | 6.0430 | 92 |
| 0.6407 | 6.0498 | 93 |
| 0.5903 | 6.0565 | 94 |
| 0.6226 | 6.0614 | 95 |
| 0.5934 | 6.0662 | 96 |
| 0.5140 | 6.0713 | 97 |
| 0.5300 | 6.0766 | 98 |
| 0.4860 | 6.0810 | 99 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.9.1
- Datasets 2.5.1
- Tokenizers 0.13.3
|
Cynthiaiii4/Text_classification_model_bbu_12500
|
Cynthiaiii4
| 2023-05-11T12:49:26Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T11:22:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbu_12500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbu_12500
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9447
- Accuracy: 0.795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.348 | 1.0 | 882 | 0.4511 | 0.7925 |
| 0.1714 | 2.0 | 1764 | 0.5316 | 0.7925 |
| 0.0852 | 3.0 | 2646 | 0.8147 | 0.79 |
| 0.0529 | 4.0 | 3528 | 0.9447 | 0.795 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
neongeckocom/stt_pt_citrinet_512_gamma_0_25
|
neongeckocom
| 2023-05-11T12:27:12Z | 10 | 2 |
nemo
|
[
"nemo",
"onnx",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_12_0",
"license:bsd-3-clause",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2023-02-23T20:50:40Z |
---
language:
- pt
library_name: nemo
datasets:
- mozilla-foundation/common_voice_12_0
tags:
- automatic-speech-recognition
model-index:
- name: stt_pt_citrinet_512_gamma_0_25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 12.0
type: mozilla-foundation/common_voice_12_0
config: clean
split: test
args:
language: pt
metrics:
- name: Test WER
type: wer
value: 6.033
license: bsd-3-clause
---
# NVIDIA Streaming Citrinet 512 (pt-PT)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) |
## Attribution
The [stt_en_citrinet_512_gamma_0_25](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_citrinet_512_gamma_0_25) model by [NVIDIA](https://github.com/NVIDIA), licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/), was used as the initial checkpoint.
|
samhog/psychology-alpaca-rm
|
samhog
| 2023-05-11T12:19:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-02T18:40:41Z |
## Psychology-Alpaca-RM
- PEFT adapter layers for a reward model based on ``decapoda-research/llama-7b-hf``.
- Trained on a small subset (110 data points) of ``samhog/cgpt-pairs``, a dataset of 10K prompts, each paired with two answers (one 'good', one 'bad')
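A reward model over such good/bad answer pairs is typically trained with a pairwise (Bradley–Terry style) loss that pushes the score of the preferred answer above the rejected one. The card does not state the exact objective, so the following is only an illustrative sketch:

```python
import math

def pairwise_reward_loss(r_good: float, r_bad: float) -> float:
    # -log(sigmoid(r_good - r_bad)): small when the "good" answer scores higher.
    margin = r_good - r_bad
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A larger margin between good and bad scores gives a smaller loss.
print(pairwise_reward_loss(2.0, -1.0), pairwise_reward_loss(0.0, 0.0))
```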
|
ngkuissi/dqn-SpaceInvadersNoFrameskip-v4
|
ngkuissi
| 2023-05-11T12:13:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T12:12:49Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 681.00 +/- 246.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ngkuissi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ngkuissi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ngkuissi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
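With `exploration_fraction=0.1` and `exploration_final_eps=0.01` over 1,000,000 timesteps, SB3's DQN anneals ε linearly from 1.0 to 0.01 during the first 100,000 steps and then holds it constant — roughly:

```python
def epsilon(step: int,
            total_timesteps: int = 1_000_000,
            exploration_fraction: float = 0.1,
            initial_eps: float = 1.0,
            final_eps: float = 0.01) -> float:
    """Linear epsilon-greedy schedule as used by SB3's DQN (sketch)."""
    progress = min(step / (exploration_fraction * total_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0), epsilon(50_000), epsilon(100_000), epsilon(999_999))
```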
|
neongeckocom/stt_de_citrinet_512_gamma_0_25
|
neongeckocom
| 2023-05-11T12:02:45Z | 6 | 0 |
nemo
|
[
"nemo",
"onnx",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_12_0",
"license:bsd-3-clause",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-12-31T08:46:02Z |
---
language:
- de
library_name: nemo
datasets:
- mozilla-foundation/common_voice_12_0
tags:
- automatic-speech-recognition
model-index:
- name: stt_de_citrinet_512_gamma_0_25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 12.0
type: mozilla-foundation/common_voice_12_0
config: clean
split: test
args:
language: de
metrics:
- name: Test WER
type: wer
value: 11.10
license: bsd-3-clause
---
# NVIDIA Streaming Citrinet 512 (de-DE)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) |
## Attribution
The [stt_en_citrinet_512_gamma_0_25](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_citrinet_512_gamma_0_25) model by [NVIDIA](https://github.com/NVIDIA), licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/), was used as the initial checkpoint.
|
abdullahalzubaer/bloom-6b4-clp-german-instruct-lora-peft
|
abdullahalzubaer
| 2023-05-11T11:54:44Z | 0 | 1 | null |
[
"bloom",
"lora",
"LLM",
"text-generation",
"de",
"region:us"
] |
text-generation
| 2023-05-04T23:43:24Z |
---
language:
- de
pipeline_tag: text-generation
tags:
- bloom
- lora
- LLM
---
Github: https://github.com/abdullahalzubaer/bloom-6b4-clp-german-lora-inference
Dataset used to train the adapter:
See this thread for more details https://huggingface.co/asprenger/bloom-6b4-clp-german-instruct-lora/discussions/2
- yizhongw/self_instruct [Translated to German]
- https://huggingface.co/datasets/yizhongw/self_instruct
This lora adapter is from https://huggingface.co/asprenger/bloom-6b4-clp-german-instruct-lora. Thanks for the adapter! I did not train it.
I thought I was uploading the complete bloom-6b4-clp-german model together with the adapter, but after pushing I realized it was only the adapter. Still exploring how PEFT with LoRA works :)
Strict requirement for peft:
`peft==0.2.0`
Install requirements:
`pip install transformers accelerate bitsandbytes peft==0.2.0`
The latest peft has breaking changes with bloom-6b4-clp-german and this LoRA adapter; the only way (I think) to get them both to work is to train the base model or the adapter again (I am not sure yet).
Reference:
- https://github.com/linhduongtuan/BLOOM-LORA/issues/5
- https://github.com/huggingface/peft/issues/276
|
DarwinAnim8or/GPT-Greentext-355m
|
DarwinAnim8or
| 2023-05-11T11:33:44Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"fun",
"greentext",
"en",
"dataset:DarwinAnim8or/greentext",
"license:mit",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-29T02:47:49Z |
---
license: mit
datasets:
- DarwinAnim8or/greentext
language:
- en
tags:
- fun
- greentext
widget:
- text: ">be me"
example_title: "be me"
co2_eq_emissions:
emissions: 60
source: "https://mlco2.github.io/impact/#compute"
training_type: "fine-tuning"
geographical_location: "Oregon, USA"
hardware_used: "1 T4, Google Colab"
---
# GPT-Greentext-355m
A finetuned version of [GPT2-Medium](https://huggingface.co/gpt2-medium) on the 'greentext' dataset. (Linked above)
A demo is available [here](https://huggingface.co/spaces/DarwinAnim8or/GPT-Greentext-Playground)
The demo playground is recommended over the inference box on the right.
The largest model in this series is located here: [GPT-Greentext-1.5b](https://huggingface.co/DarwinAnim8or/GPT-Greentext-1.5b)
# Training Procedure
This was trained on the 'greentext' dataset, using the "HappyTransformers" library on Google Colab.
This model was trained for 15 epochs with learning rate 1e-2.
# Biases & Limitations
This likely contains the same biases and limitations as the original GPT2 that it is based on, and additionally heavy biases from the greentext dataset.
It likely will generate offensive output.
# Intended Use
This model is meant for fun, nothing else.
# Sample Use
```python
#Import model:
from happytransformer import HappyGeneration
happy_gen = HappyGeneration("GPT2", "DarwinAnim8or/GPT-Greentext-355m")
#Set generation settings:
from happytransformer import GENSettings
args_top_k = GENSettings(no_repeat_ngram_size=3, do_sample=True, top_k=80, temperature=0.8, max_length=150, early_stopping=False)
#Generate a response:
result = happy_gen.generate_text(""">be me
>""", args=args_top_k)
print(result)
print(result.text)
```
|
imhidayat/firstModel
|
imhidayat
| 2023-05-11T11:28:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-11T11:07:03Z |
---
language: en
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
# ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")
sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()]
```
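The `torch.round((torch.sign(logits) + 1) / 2)` step simply thresholds each discriminator logit at zero (positive means the token is predicted as replaced). An equivalent dependency-free mapping, for illustration:

```python
def logit_to_label(logit: float) -> int:
    # Positive logit -> token predicted "replaced/fake" (1); otherwise "original" (0).
    # Note: torch.round rounds 0.5 to 0 (round-half-to-even), so a zero logit maps to 0.
    return 1 if logit > 0 else 0

print([logit_to_label(x) for x in (-3.2, -0.1, 0.0, 0.4, 5.0)])
```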
|
JoBuettner/ppo-PyramidsRND
|
JoBuettner
| 2023-05-11T11:19:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-05-11T11:12:55Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: JoBuettner/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dawoz/ppo-SnowballTarget
|
dawoz
| 2023-05-11T11:03:01Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-05-11T11:02:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: dawoz/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ilhkn/sentence_classifier
|
ilhkn
| 2023-05-11T11:02:07Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-11T10:03:32Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
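Since the card says the embeddings can be used for clustering or semantic search — both of which compare vectors by cosine similarity — here is a dependency-free sketch (the vectors below are made up for illustration; in practice they would come from `model.encode`):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Parallel vectors score ~1.0; orthogonal vectors score ~0.0.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```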
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 180,
"warmup_steps": 18,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ilhkn/sentence_classifier2
|
ilhkn
| 2023-05-11T11:00:52Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-11T11:00:39Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# ilhkn/sentence_classifier2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("ilhkn/sentence_classifier2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
bofenghuang/vigogne-bloom-7b1-instruct
|
bofenghuang
| 2023-05-11T10:40:17Z | 0 | 4 |
transformers
|
[
"transformers",
"tensorboard",
"alpaca",
"bloom",
"LLM",
"text-generation",
"fr",
"dataset:tatsu-lab/alpaca",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
text-generation
| 2023-03-26T22:14:23Z |
---
license: bigscience-bloom-rail-1.0
language:
- fr
pipeline_tag: text-generation
library_name: transformers
tags:
- alpaca
- bloom
- LLM
datasets:
- tatsu-lab/alpaca
inference: false
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-instruct-bloom-7b1/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-instruct-bloom-7b1: A French Instruction-following BLOOM Model
Vigogne-instruct-bloom-7b1 is a [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) model fine-tuned to follow the 🇫🇷 French instructions.
For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Usage
This repo only contains the low-rank adapter. In order to access the complete model, you also need to load the base LLM model and tokenizer.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_name_or_path = "bigscience/bloom-7b1"
lora_model_name_or_path = "bofenghuang/vigogne-instruct-bloom-7b1"
tokenizer = AutoTokenizer.from_pretrained(base_model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_model_name_or_path)
```
You can run inference with this model using the following Google Colab notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
|
Mizuiro-sakura/deberta-v2-tiny-japanese-finetuned-QA
|
Mizuiro-sakura
| 2023-05-11T10:38:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"question-answering",
"deberta",
"question answering",
"squad",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-11T10:34:38Z |
---
license: mit
language: ja
library_name: transformers
tags:
- pytorch
- deberta
- deberta-v2
- question-answering
- question answering
- squad
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
---
# This model is deberta-v2-tiny-japanese fine-tuned for question answering
This model was fine-tuned from deberta-v2-tiny-japanese on the Driving-domain QA dataset (DDQA)( https://nlp.ist.i.kyoto-u.ac.jp/index.php?Driving%20domain%20QA%20datasets ).
It can be used for question-answering (SQuAD-style) tasks.
# How to use
Install transformers and pytorch, then execute the following code to solve a question-answering task:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/deberta-v2-tiny-japanese-finetuned-QA')
model = AutoModelForQuestionAnswering.from_pretrained('Mizuiro-sakura/deberta-v2-tiny-japanese-finetuned-QA')  # load the fine-tuned model
text = {
    'context': '私の名前はEIMIです。好きな食べ物は苺です。 趣味は皆さんと会話することです。',
    'question': '好きな食べ物は何ですか'
}
input_ids = tokenizer.encode(text['question'], text['context'])  # tokenize and convert to input IDs
output = model(torch.tensor([input_ids]))  # run the fine-tuned model
prediction = tokenizer.decode(input_ids[torch.argmax(output.start_logits): torch.argmax(output.end_logits)])  # decode the predicted answer span
print(prediction)
```
# Model accuracy
Exact Match: 0.46698564593301434
F1: 0.5808696453091605
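The Exact Match and F1 figures above presumably follow the standard SQuAD-style definitions. A minimal sketch of how such scores are computed for a single example (whitespace tokenization assumed purely for illustration):

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> int:
    # 1 if the prediction matches the gold answer exactly (after stripping), else 0.
    return int(pred.strip() == gold.strip())

def token_f1(pred: str, gold: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over shared tokens.
    p, g = pred.split(), gold.split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("strawberries", "strawberries"),
      token_f1("likes strawberries", "loves strawberries"))
```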
# What is deberta-v2-tiny-japanese?
This is a Japanese DeBERTa V2 tiny model pre-trained on Japanese Wikipedia (3.2GB), the Japanese portion of CC-100 (85GB), and the Japanese portion of OSCAR (54GB).
It was released by the Kurohashi Lab at Kyoto University.
# Acknowledgments
I would like to thank the Kurohashi Lab at Kyoto University for releasing the base model.
|
BlueAvenir/mentioning_type_class_model
|
BlueAvenir
| 2023-05-11T10:37:23Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-11T10:37:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 228 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 228,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
pqai/pqai-vectorizer-v3
|
pqai
| 2023-05-11T10:33:37Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"pqai",
"patents",
"prior-art-search",
"en",
"license:mit",
"region:us"
] | null | 2023-05-11T10:23:57Z |
---
license: mit
language:
- en
metrics:
- accuracy
library_name: sentence-transformers
tags:
- pqai
- patents
- prior-art-search
---
|
paulorvdc/sentencebert-fine-tuned-months-soy
|
paulorvdc
| 2023-05-11T10:29:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-05-11T10:02:49Z |
---
license: mit
---
# Sentence BERT fine-tuned commodities
This model is part of a collection of fine-tuned Sentence BERT models that were generated with the data of the "TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks" article.
Source code and networks are available at the following GitHub repo: https://github.com/paulorvdc/TRENCHANT
## how to cite
```
@article{doCarmo_ReisFilho_Marcacini_2023,
title={TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks},
volume={13},
url={https://sol.sbc.org.br/journals/index.php/jidm/article/view/2546},
DOI={10.5753/jidm.2022.2546},
number={6},
journal={Journal of Information and Data Management},
author={do Carmo, P. and Reis Filho, I. J. and Marcacini, R.},
year={2023},
month={Jan.}
}
```
## how to use
```
from sentence_transformers import SentenceTransformer, LoggingHandler
import numpy as np
import logging
# load model
np.set_printoptions(threshold=100)
logging.basicConfig(format='%(asctime)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=logging.INFO,
handlers=[LoggingHandler()])
model = SentenceTransformer('paulorvdc/sentencebert-fine-tuned-months-soy')
finetuned_embeddings = list(model.encode(['Brazilian Corn Acreage Losing out to Higher Priced Soybeans', 'Brazil Soybeans are 93% GMO, Corn is 82%, and Cotton is 66%']))
```
|
paulorvdc/sentencebert-fine-tuned-months-corn
|
paulorvdc
| 2023-05-11T10:28:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-05-11T09:39:44Z |
---
license: mit
---
# Sentence BERT fine-tuned commodities
This model is part of a collection of fine-tuned Sentence BERT models that were generated with the data of the "TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks" article.
Source code and networks are available at the following GitHub repo: https://github.com/paulorvdc/TRENCHANT
## how to cite
```
@article{doCarmo_ReisFilho_Marcacini_2023,
title={TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks},
volume={13},
url={https://sol.sbc.org.br/journals/index.php/jidm/article/view/2546},
DOI={10.5753/jidm.2022.2546},
number={6},
journal={Journal of Information and Data Management},
author={do Carmo, P. and Reis Filho, I. J. and Marcacini, R.},
year={2023},
month={Jan.}
}
```
## how to use
```
from sentence_transformers import SentenceTransformer, LoggingHandler
import numpy as np
import logging
# load model
np.set_printoptions(threshold=100)
logging.basicConfig(format='%(asctime)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=logging.INFO,
handlers=[LoggingHandler()])
model = SentenceTransformer('paulorvdc/sentencebert-fine-tuned-months-corn')
finetuned_embeddings = list(model.encode(['Livestock Producers in Brazil Fear Diversion of Corn to Export and Ethanol Production', 'Brazilian Farmers Undecided about Safrinha Corn Acreage']))
```
|
vsrinivas/mt5-small-finetuned-amazon-en-es
|
vsrinivas
| 2023-05-11T10:25:01Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-05-08T14:52:06Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0210
- Rouge1: 17.0885
- Rouge2: 8.4139
- Rougel: 16.6189
- Rougelsum: 16.7761
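A minimal inference sketch for this checkpoint (the sample review text and the generation-length parameters below are illustrative assumptions, not values from the training run):

```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint for abstractive summarization.
summarizer = pipeline(
    "summarization",
    model="vsrinivas/mt5-small-finetuned-amazon-en-es",
)

review = (
    "I bought this coffee maker last month. It brews quickly, the carafe "
    "keeps coffee warm for hours, and cleanup is easy. Highly recommended."
)
# max_length / min_length bound the generated summary in tokens.
summary = summarizer(review, max_length=30, min_length=5)
print(summary[0]["summary_text"])
```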
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.706 | 1.0 | 1209 | 3.2245 | 16.2371 | 7.969 | 15.6425 | 15.7165 |
| 3.6602 | 2.0 | 2418 | 3.0803 | 16.6646 | 7.7573 | 16.0524 | 16.1609 |
| 3.4306 | 3.0 | 3627 | 3.0504 | 17.9626 | 9.2086 | 17.4429 | 17.5335 |
| 3.3144 | 4.0 | 4836 | 3.0394 | 17.7522 | 8.4791 | 17.2721 | 17.36 |
| 3.2345 | 5.0 | 6045 | 3.0431 | 17.3159 | 8.5519 | 17.0148 | 17.0964 |
| 3.1684 | 6.0 | 7254 | 3.0299 | 17.4355 | 8.6873 | 17.0855 | 17.2222 |
| 3.1375 | 7.0 | 8463 | 3.0238 | 16.8874 | 8.3565 | 16.4901 | 16.6159 |
| 3.112 | 8.0 | 9672 | 3.0210 | 17.0885 | 8.4139 | 16.6189 | 16.7761 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
muhammadravi251001/fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
|
muhammadravi251001
| 2023-05-11T10:11:52Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-06T16:25:24Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8673
- Exact Match: 74.0838
- F1: 81.0390
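A short extractive-QA sketch with this checkpoint (the Indonesian question/context pair below is an illustrative assumption; the card itself ships no sample data):

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned XLM-RoBERTa model.
qa = pipeline(
    "question-answering",
    model=(
        "muhammadravi251001/fine-tuned-DatasetQAS-IDK-MRC-with-"
        "xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05"
    ),
)
result = qa(
    question="Kapan Indonesia memproklamasikan kemerdekaan?",
    context="Indonesia memproklamasikan kemerdekaannya pada 17 Agustus 1945.",
)
# result carries the answer span, its score, and character offsets.
print(result["answer"], result["score"])
```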
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.2177 | 0.49 | 36 | 2.3043 | 45.2880 | 46.1924 |
| 3.4831 | 0.98 | 72 | 1.5333 | 51.3089 | 56.5227 |
| 1.6897 | 1.48 | 108 | 1.1604 | 60.2094 | 68.3733 |
| 1.6897 | 1.97 | 144 | 0.9852 | 65.3141 | 72.9935 |
| 1.1108 | 2.46 | 180 | 0.9487 | 65.4450 | 72.8064 |
| 0.8854 | 2.95 | 216 | 0.8634 | 68.0628 | 75.1967 |
| 0.7269 | 3.45 | 252 | 0.9271 | 69.7644 | 76.9429 |
| 0.7269 | 3.94 | 288 | 0.9044 | 69.3717 | 76.4864 |
| 0.648 | 4.44 | 324 | 0.8352 | 73.1675 | 79.8410 |
| 0.5446 | 4.92 | 360 | 0.8074 | 74.7382 | 81.2181 |
| 0.5446 | 5.42 | 396 | 0.8726 | 73.4293 | 80.5400 |
| 0.497 | 5.91 | 432 | 0.8598 | 73.6911 | 80.8239 |
| 0.4647 | 6.41 | 468 | 0.8673 | 74.0838 | 81.0390 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Bailefan/ppo-Huggy
|
Bailefan
| 2023-05-11T09:47:46Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-11T09:47:39Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: Bailefan/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
utnah/safetensors
|
utnah
| 2023-05-11T09:43:20Z | 0 | 18 | null |
[
"safetensors",
"license:openrail",
"region:us"
] | null | 2023-01-06T21:17:18Z |
---
license: openrail
---
Stable Diffusion weight models in safetensors format.
For quick loading in [Google Colab](https://colab.research.google.com/drive/1TC4SSLncPWytSPvquR6Y4-U7wZRfAXrV):
[](https://colab.research.google.com/drive/1TC4SSLncPWytSPvquR6Y4-U7wZRfAXrV)
|
xqchq/test-trainer2
|
xqchq
| 2023-05-11T09:42:34Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T03:38:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test-trainer2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer2
This model is a fine-tuned version of [hfl/minirbt-h256](https://huggingface.co/hfl/minirbt-h256) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cynthiaiii4/Text_classification_model_bbc_v6
|
Cynthiaiii4
| 2023-05-11T09:40:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-11T07:51:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbc_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbc_v6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8115
- Accuracy: 0.77
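A minimal classification sketch (the headline below is an illustrative assumption; the card does not state its label set, so inspect the returned label names yourself):

```python
from transformers import pipeline

# Load the fine-tuned BERT checkpoint for single-label text classification.
classifier = pipeline(
    "text-classification",
    model="Cynthiaiii4/Text_classification_model_bbc_v6",
)
preds = classifier("The central bank raised interest rates again this quarter.")
# Each prediction is a dict with the predicted label and its softmax score.
print(preds)
```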
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 0.5348 | 0.7625 |
| No log | 2.0 | 100 | 0.7592 | 0.76 |
| No log | 3.0 | 150 | 0.7245 | 0.775 |
| No log | 4.0 | 200 | 0.8115 | 0.77 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jasman0186/q-FrozenLake-v1-8x8-Slippery
|
jasman0186
| 2023-05-11T09:36:21Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-11T09:36:15Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.20 +/- 0.40
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# load_from_hub is not a library function: fetch and unpickle the Q-table from the Hub.
def load_from_hub(repo_id, filename):
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="jasman0186/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BlueAvenir/sti_modern_workplace_class_model_updated
|
BlueAvenir
| 2023-05-11T09:29:38Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-11T09:29:28Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 300 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 300,
"warmup_steps": 30,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ssdv/mountainview
|
ssdv
| 2023-05-11T09:27:25Z | 0 | 0 | null |
[
"paddlepaddle",
"stable-diffusion",
"stable-diffusion-ppdiffusers",
"text-to-image",
"ppdiffusers",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-11T09:26:41Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: sks
tags:
- stable-diffusion
- stable-diffusion-ppdiffusers
- text-to-image
- ppdiffusers
- lora
inference: false
---
# LoRA DreamBooth - ssdv/mountainview
The LoRA weights in this repository were trained from runwayml/stable-diffusion-v1-5 using the [DreamBooth](https://dreambooth.github.io/) technique with the instance prompt sks. Below are some images generated during training.




|