Dataset columns:

| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-01 00:47:04 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (530 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-01 00:46:57 |
| card | string (length) | 11 | 1.01M |
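These columns mirror fields exposed by the Hub's model-listing API. As a rough sketch (not part of the dump itself; the sort order and limit are illustrative, and some fields may be None depending on the endpoint), the same metadata can be pulled with `huggingface_hub`:

```python
from huggingface_hub import HfApi

api = HfApi()
# List a few models sorted by downloads and print the columns this dump records.
for m in api.list_models(sort="downloads", direction=-1, limit=5):
    print(m.id, m.downloads, m.likes, m.library_name, m.pipeline_tag, m.tags)
```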
nolanaatama/mrcllphllvnrvcdclkd44
nolanaatama
2023-07-03T20:45:27Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-03T19:11:59Z
--- license: creativeml-openrail-m ---
Valinski/Spiderverse_trial_run
Valinski
2023-07-03T20:38:51Z
0
0
diffusers
[ "diffusers", "arxiv:1910.09700", "license:artistic-2.0", "region:us" ]
null
2023-07-03T20:11:02Z
--- license: artistic-2.0 library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
idealflaw/ppo-LunarLander-v2
idealflaw
2023-07-03T20:27:22Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T20:27:04Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.20 +/- 23.03 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption, following the deep-RL-course naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repository's file list for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="idealflaw/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
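To sanity-check the reported mean reward, a short evaluation sketch (assumes a Box2D-enabled `gym` install; the episode count is illustrative):

```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# `model` as loaded in the usage snippet above.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```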
LarryAIDraw/oshinoko-s1-step-60000
LarryAIDraw
2023-07-03T20:26:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-03T20:18:57Z
--- license: creativeml-openrail-m --- https://civitai.com/models/40182/oshinoko-characters-lohaloconfullckpt-oror-hoshino-ai-hoshino-aquamarine-hoshino-ruby-arima-kana-saito-miyako-kurokawa-akane-kotobuki-minami-shiranui-frill-sumi-yuki
espnet/brianyan918_mustc-v2_en-de_st_ctc_rnnt_asrinit_raw_en_de_bpe_tc4000_sp
espnet
2023-07-03T20:20:51Z
1
0
null
[ "region:us" ]
null
2023-07-03T20:18:41Z
- Download model and run inference: `./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_mustc-v2_en-de_st_ctc_rnnt_asrinit_raw_en_de_bpe_tc4000_sp --inference_config conf/tuning/decode_rnnt_tsd_mse4_scorenormduring_beam10.yaml`

|dataset|score|verbose_score|
|---|---|---|
|decode_rnnt_tsd_mse4_scorenormduring_beam10_st_model_valid.loss.ave_10best/tst-COMMON.en-de|27.6|60.2/33.6/21.0/13.7 (BP = 0.998 ratio = 0.998 hyp_len = 51602 ref_len = 51699)|
espnet/brianyan918_mustc-v2_en-de_st_ctc_md_conformer_asrinit_v3_raw_en_de_bpe_tc4000_sp
espnet
2023-07-03T20:17:25Z
0
0
null
[ "region:us" ]
null
2023-07-03T20:15:03Z
- Download model and run inference: `./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_mustc-v2_en-de_st_ctc_md_conformer_asrinit_v3_raw_en_de_bpe_tc4000_sp --inference_config conf/tuning/decode_st_md_ctc0.3.yaml`

|dataset|score|verbose_score|
|---|---|---|
|decode_st_md_ctc0.3_st_model_valid.acc.ave_10best/tst-COMMON.en-de|28.8|61.5/35.0/22.2/14.7 (BP = 0.994 ratio = 0.994 hyp_len = 51386 ref_len = 51699)|
alphaduriendur/ner-deBERTa-v3-large-conll2003
alphaduriendur
2023-07-03T20:07:39Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "deberta-v2", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-03T06:16:03Z
--- license: mit tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: ner-deBERTa-v3-large-conll2003 results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: test args: conll2003 metrics: - name: Precision type: precision value: 0.9235068110373734 - name: Recall type: recall value: 0.9362606232294618 - name: F1 type: f1 value: 0.9298399859328293 - name: Accuracy type: accuracy value: 0.9853128028426833 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ner-deBERTa-v3-large-conll2003 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.1546 - Precision: 0.9235 - Recall: 0.9363 - F1: 0.9298 - Accuracy: 0.9853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0077 | 1.0 | 878 | 0.1280 | 0.9096 | 0.9265 | 0.9180 | 0.9832 | | 0.0084 | 2.0 | 1756 | 0.1380 | 0.9167 | 0.9299 | 0.9233 | 0.9844 | | 0.0037 | 3.0 | 2634 | 0.1495 | 0.9221 | 0.9347 | 0.9283 | 0.9850 | | 0.0015 | 4.0 | 3512 | 0.1517 | 0.9215 | 0.9347 | 0.9280 | 0.9849 | | 0.0006 | 5.0 | 4390 | 0.1546 | 0.9235 | 0.9363 | 0.9298 | 0.9853 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
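The auto-generated card stops short of a usage snippet; a minimal inference sketch with the `transformers` pipeline (the aggregation strategy and example sentence are illustrative, not from the card):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alphaduriendur/ner-deBERTa-v3-large-conll2003",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```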
goter/onceler
goter
2023-07-03T19:54:52Z
0
1
null
[ "ru", "en", "license:openrail", "region:us" ]
null
2023-06-04T13:16:48Z
--- license: openrail language: - ru - en ---
mrizalf7/xlm-r-qa-small-squad
mrizalf7
2023-07-03T19:50:09Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-07-03T18:15:49Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlm-r-qa-small-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-r-qa-small-squad This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9800 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2394 | 1.0 | 5437 | 1.9701 | | 0.9683 | 2.0 | 10874 | 1.9800 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
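As above, the trainer-generated card omits usage; a minimal question-answering sketch (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrizalf7/xlm-r-qa-small-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```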
PrakhAI/HelloWorld
PrakhAI
2023-07-03T19:22:35Z
0
0
null
[ "dataset:mnist", "license:gpl-3.0", "region:us" ]
null
2023-07-02T01:34:55Z
--- license: gpl-3.0 datasets: - mnist --- Flax handwritten digit (MNIST) classification model trained using https://colab.research.google.com/github/google/flax/blob/main/docs/getting_started.ipynb
andres-gv/model-cmi-1
andres-gv
2023-07-03T19:19:59Z
4
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2023-07-03T19:13:59Z
--- pipeline_tag: text-classification library_name: bertopic ---
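The card carries only front matter; assuming the repository stores a serialized BERTopic model (which the `bertopic` library tag suggests), loading it might look like this sketch:

```python
from bertopic import BERTopic

# Load the topic model directly from the Hub repository (assumes bertopic >= 0.15 serialization).
topic_model = BERTopic.load("andres-gv/model-cmi-1")
print(topic_model.get_topic_info().head())
```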
practical-dreamer/rpgpt-7b-lora
practical-dreamer
2023-07-03T19:08:58Z
0
1
null
[ "dataset:practicaldreamer/RPGPT_PublicDomain-ShareGPT", "region:us" ]
null
2023-07-03T18:16:52Z
--- datasets: - practicaldreamer/RPGPT_PublicDomain-ShareGPT --- ## Introduction This is my first attempt at training a model for long form character interaction using asterisk roleplay format. There are plenty of general instruction/answer models but most focus on single responses between an ai and a human. My goal for this project is to more closely align the training data with CHARACTER interactions for roleplay. This model is trained on a small synthetic dataset of characters interacting through a variety of scenarios. The Characters, Scenarios and interactions were all generated by GPT4. Intended for research, creative writing, entertainment, DnD campaigns? fun! ## Train Summary [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ``` duration: ~1hr gpu: 1xA100 80GB epochs: 1.0 speed: 3e-5 sequence_len: 2048 gradient_accumulation_steps: 32 wandb: https://wandb.ai/practicaldreamer/rpgpt/runs/bvyawfro ``` *Please see the documentation folder for more information* ## Usage This LoRA was trained for use with **Neko-Institute-of-Science/LLaMA-7B-HF** Please follow the prompt format outlined below. *Hint: If you're not sure what to put for your character description (or you're lazy) just ask chatgpt to generate it for you! Example:* ``` Generate a short character description for Dr. Watson (The Adventures of Sherlock Holmes) that includes gender, age, MBTI and speech accent using 30 words or less. ``` ## Prompt Format Context/Memory: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a character roleplay dialogue using asterisk roleplay format based on the following character descriptions and scenario. (Each line in your response must be from the perspective of one of these characters) ## Characters <User-Character Name> (<User-Character Universe>): <User-Character Description> <Bot-Character Name> (Bot-Character Universe): <Bot-Character Description> ## Scenario <Scenario Description> ASSISTANT: ``` Turn Template: ``` <User-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* <Bot-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* <User-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* <Bot-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* ... ``` ## Example ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a character roleplay dialogue using asterisk roleplay format based on the following character descriptions and scenario. (Each line in your response must be from the perspective of one of these characters) ## Characters Baloo (The Jungle Book): Male, middle-aged bear, ENFP, primarily American accent with slight Indian inflections. Wise, carefree, and friendly, he teaches Mowgli the ways of the jungle. 
The Queen of Hearts (Alice's Adventures in Wonderland): Female, middle-aged, ENTJ, British English accent; imperious, temperamental, commanding, but quick to change from anger to warmth - the quintessential capricious monarch. ## Scenario In an enchanted glade deep within Wonderland, Baloo finds himself separated from Mowgli and in the Queen of Hearts' territory. Having heard tales of this mercurial ruler, Baloo is intrigued but cautious. To his surprise, the Queen of Hearts approaches him with a broken heart - she has been betrayed by one of her closest confidants. The Queen initially sees Baloo as an opportunity to vent her frustrations and seek validation for her hurt feelings. Baloo, however, quickly recognizes that the Queen's explosive temperament is driving others away and contributing to her own isolation. Using his wisdom and patience, Baloo tries to teach the Queen the value of empathy and understanding in forming meaningful relationships. Through their dialogue, they explore their contrasting attitudes toward power, authority, and friendship. The Queen learns that it is not through fear and command that she can find true loyalty, but rather by treating others with kindness. In this somber environment where both characters are out of their comfort zones, can Baloo help the Queen mend her own heart by opening up about his experiences with Mowgli? Will the Queen allow herself to be vulnerable enough to learn from the wise bear? And can these two unlikely friends find common ground in this enchanted glade within Wonderland? ASSISTANT: Baloo: *stumbles into the enchanted glade, looking around curiously* Well, I must've taken a wrong turn back there. Mowgli! *calls out for Mowgli* The Queen of Hearts: *storms into the glade, clutching a broken heart symbol, face contorted in anger* Off with their heads! Everyone's heads! Baloo: *gently scratches his head and approaches the Queen cautiously* Uh, excuse me, ma'am? I couldn't help but notice that you seem rather... upset. The Queen of Hearts: *sniffs haughtily and scowls at Baloo* And who are you to pry into my affairs? You should be bowing to your queen! Baloo: *bows graciously but maintains eye contact* Begging your pardon, Your Majesty. My name is Baloo, and I've heard tales of your... unique temperament. The Queen of Hearts: ``` ## Thanks openaccess-ai-collective https://huggingface.co/openaccess-ai-collective --- license: mit ---
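The card names the base checkpoint but does not show how to attach the LoRA; a minimal PEFT sketch under that assumption (dtype and device settings are illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Neko-Institute-of-Science/LLaMA-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "practical-dreamer/rpgpt-7b-lora")
```

The same pattern applies to the 13B variant below, with `Neko-Institute-of-Science/LLaMA-13B-HF` and `practical-dreamer/rpgpt-13b-lora`.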
practical-dreamer/rpgpt-13b-lora
practical-dreamer
2023-07-03T19:08:32Z
0
2
null
[ "dataset:practicaldreamer/RPGPT_PublicDomain-ShareGPT", "region:us" ]
null
2023-07-03T17:17:03Z
--- datasets: - practicaldreamer/RPGPT_PublicDomain-ShareGPT --- ## Introduction This is my first attempt at training a model for long form character interaction using asterisk roleplay format. There are plenty of general instruction/answer models but most focus on single responses between an ai and a human. My goal for this project is to more closely align the training data with CHARACTER interactions for roleplay. This model is trained on a small synthetic dataset of characters interacting through a variety of scenarios. The Characters, Scenarios and interactions were all generated by GPT4. Intended for research, creative writing, entertainment, DnD campaigns? fun! ## Train Summary [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ``` duration: ~1.5hrs gpu: 1xA100 80GB epochs: 1.0 speed: 3e-5 sequence_len: 2048 gradient_accumulation_steps: 32 wandb: https://wandb.ai/practicaldreamer/rpgpt/runs/b3sznjpz ``` *Please see the documentation folder for more information* ## Usage This LoRA was trained for use with **Neko-Institute-of-Science/LLaMA-13B-HF** Please follow the prompt format outlined below. *Hint: If you're not sure what to put for your character description (or you're lazy) just ask chatgpt to generate it for you! Example:* ``` Generate a short character description for Dr. Watson (The Adventures of Sherlock Holmes) that includes gender, age, MBTI and speech accent using 30 words or less. ``` ## Prompt Format Context/Memory: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a character roleplay dialogue using asterisk roleplay format based on the following character descriptions and scenario. (Each line in your response must be from the perspective of one of these characters) ## Characters <User-Character Name> (<User-Character Universe>): <User-Character Description> <Bot-Character Name> (Bot-Character Universe): <Bot-Character Description> ## Scenario <Scenario Description> ASSISTANT: ``` Turn Template: ``` <User-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* <Bot-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* <User-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* <Bot-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\* ... ``` ## Example ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a character roleplay dialogue using asterisk roleplay format based on the following character descriptions and scenario. (Each line in your response must be from the perspective of one of these characters) ## Characters Baloo (The Jungle Book): Male, middle-aged bear, ENFP, primarily American accent with slight Indian inflections. Wise, carefree, and friendly, he teaches Mowgli the ways of the jungle. 
The Queen of Hearts (Alice's Adventures in Wonderland): Female, middle-aged, ENTJ, British English accent; imperious, temperamental, commanding, but quick to change from anger to warmth - the quintessential capricious monarch. ## Scenario In an enchanted glade deep within Wonderland, Baloo finds himself separated from Mowgli and in the Queen of Hearts' territory. Having heard tales of this mercurial ruler, Baloo is intrigued but cautious. To his surprise, the Queen of Hearts approaches him with a broken heart - she has been betrayed by one of her closest confidants. The Queen initially sees Baloo as an opportunity to vent her frustrations and seek validation for her hurt feelings. Baloo, however, quickly recognizes that the Queen's explosive temperament is driving others away and contributing to her own isolation. Using his wisdom and patience, Baloo tries to teach the Queen the value of empathy and understanding in forming meaningful relationships. Through their dialogue, they explore their contrasting attitudes toward power, authority, and friendship. The Queen learns that it is not through fear and command that she can find true loyalty, but rather by treating others with kindness. In this somber environment where both characters are out of their comfort zones, can Baloo help the Queen mend her own heart by opening up about his experiences with Mowgli? Will the Queen allow herself to be vulnerable enough to learn from the wise bear? And can these two unlikely friends find common ground in this enchanted glade within Wonderland? ASSISTANT: Baloo: *stumbles into the enchanted glade, looking around curiously* Well, I must've taken a wrong turn back there. Mowgli! *calls out for Mowgli* The Queen of Hearts: *storms into the glade, clutching a broken heart symbol, face contorted in anger* Off with their heads! Everyone's heads! Baloo: *gently scratches his head and approaches the Queen cautiously* Uh, excuse me, ma'am? I couldn't help but notice that you seem rather... upset. The Queen of Hearts: *sniffs haughtily and scowls at Baloo* And who are you to pry into my affairs? You should be bowing to your queen! Baloo: *bows graciously but maintains eye contact* Begging your pardon, Your Majesty. My name is Baloo, and I've heard tales of your... unique temperament. The Queen of Hearts: ``` ## Thanks openaccess-ai-collective https://huggingface.co/openaccess-ai-collective --- license: mit ---
RajkNakka/Taxi-v3
RajkNakka
2023-07-03T19:01:50Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T19:01:48Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gym

# `load_from_hub` is assumed to be the deep-RL-course helper that downloads and unpickles the model.
model = load_from_hub(repo_id="RajkNakka/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
headflame02/Achaz
headflame02
2023-07-03T18:56:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-03T18:53:43Z
--- license: creativeml-openrail-m ---
geekyrakshit/DeepLabV3-Plus
geekyrakshit
2023-07-03T18:51:23Z
60
0
keras
[ "keras", "segmentation", "tensorflow", "cityscapes", "arxiv:1802.02611", "region:us" ]
null
2023-07-03T17:32:36Z
--- metrics: - accuracy - mean_iou tags: - segmentation - keras - tensorflow - cityscapes --- # DeepLabV3-Plus Keras implementation of the DeepLabV3+ model as proposed by the paper [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611) (ECCV 2018). The models were trained on the fine-annotations set of the [Cityscapes dataset](https://www.cityscapes-dataset.com) for creating presets for [this PR](https://github.com/keras-team/keras-cv/pull/1831) on the `keras-cv` repository. **Weights & Biases Dashboard:** https://wandb.ai/geekyrakshit/deeplabv3-keras-cv
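The card gives no loading snippet; since the repo is tagged `keras`, one plausible sketch uses the Keras mixin in `huggingface_hub` (assumes TensorFlow is installed):

```python
from huggingface_hub import from_pretrained_keras

# Download and rebuild the saved Keras model from the Hub repository.
model = from_pretrained_keras("geekyrakshit/DeepLabV3-Plus")
model.summary()
```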
anujsahani01/finetuned_mbart
anujsahani01
2023-07-03T18:40:55Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-17T14:19:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: finetuned_Mbart results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_Mbart This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 10000 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
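The card leaves usage unstated; a minimal loading sketch (the language pair is not documented, so generation settings are left open):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "anujsahani01/finetuned_mbart"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
# mBART-50 checkpoints also expect tokenizer.src_lang and a forced BOS token id for
# the target language; the card does not say which pair this model was tuned on.
```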
Shularp/TestHelsinkimulEnJpTh02
Shularp
2023-07-03T18:39:09Z
31
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-03T11:53:35Z
--- tags: - generated_from_trainer model-index: - name: TestHelsinkimulEnJpTh02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TestHelsinkimulEnJpTh02 This model is a fine-tuned version of [Shularp/TestHelsinkimulEnJpTh02](https://huggingface.co/Shularp/TestHelsinkimulEnJpTh02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.4364 | 1.0 | 4846 | 0.2666 | | 0.1094 | 2.0 | 9692 | 0.2277 | | 0.0484 | 3.0 | 14538 | 0.1940 | | 0.0111 | 4.0 | 19384 | 0.1749 | | 0.0105 | 5.0 | 24230 | 0.1630 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Enymy/t5-base-feedback-generator
Enymy
2023-07-03T18:38:49Z
102
0
transformers
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-03T18:36:24Z
--- tags: - generated_from_keras_callback model-index: - name: t5-base-feedback-generator results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-feedback-generator This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
falkne/justification
falkne
2023-07-03T18:20:44Z
2
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:42Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/justification` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/justification", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/argumentative
falkne
2023-07-03T18:20:37Z
2
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:36Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/argumentative` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/argumentative", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/story
falkne
2023-07-03T18:20:36Z
1
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:34Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/story` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/story", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/respect
falkne
2023-07-03T18:20:33Z
2
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:32Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/respect` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/respect", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/clarity
falkne
2023-07-03T18:20:31Z
2
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:30Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/clarity` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/clarity", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/reasonableness
falkne
2023-07-03T18:20:30Z
3
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:28Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/reasonableness` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/reasonableness", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/negEmotion
falkne
2023-07-03T18:20:24Z
0
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:23Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/negEmotion` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/negEmotion", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/posEmotion
falkne
2023-07-03T18:20:20Z
1
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:19Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/posEmotion` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/posEmotion", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/interactivity
falkne
2023-07-03T18:20:18Z
0
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:17Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/interactivity` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/interactivity", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/empathie
falkne
2023-07-03T18:20:14Z
0
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:20:13Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/empathie` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/empathie", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/impact
falkne
2023-07-03T18:20:09Z
0
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:11:35Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/impact` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/impact", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
falkne/effectiveness
falkne
2023-07-03T18:20:09Z
2
0
adapter-transformers
[ "adapter-transformers", "adapterhub:argument/quality", "roberta", "region:us" ]
null
2023-07-03T18:17:55Z
--- tags: - adapterhub:argument/quality - roberta - adapter-transformers --- # Adapter `falkne/effectiveness` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("falkne/effectiveness", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
BBAI/qlora-koalpaca-polyglot-12.8b-50step
BBAI
2023-07-03T18:06:07Z
5
0
peft
[ "peft", "region:us" ]
null
2023-06-22T06:33:23Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
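The flags listed above map directly onto a `transformers` `BitsAndBytesConfig`; a sketch of reloading a base model the same way (the base checkpoint id is a placeholder, since the card does not name it):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes quantization flags listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",  # placeholder: the card does not state the base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```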
JMGaloDoido/ppo-Huggy
JMGaloDoido
2023-07-03T18:04:01Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-03T16:41:47Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: JMGaloDoido/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
osiria/bert-tweet-base-italian-uncased
osiria
2023-07-03T17:57:30Z
173
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "it", "arxiv:1810.04805", "arxiv:2209.07562", "arxiv:2010.05609", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-05-29T17:25:55Z
--- license: apache-2.0 language: - it widget: - text: "una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra" example_title: "Example 1" - text: "il governo [MASK] dovrebbe fare politica, non soltanto propaganda! #vergogna" example_title: "Example 2" - text: "che serata da sogno sul #redcarpet! grazie a tutti gli attori e registi del [MASK] italiano #oscar #awards" example_title: "Example 3" --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: BERT-TWEET</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, obtained using <b>TwHIN-BERT</b> <b>[2]</b> ([twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base)) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in <b>[3]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset). The resulting model has 110M parameters, a vocabulary of 30,520 tokens, and a size of ~440 MB. <h3>Quick usage</h3> ```python from transformers import BertTokenizerFast, BertModel tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-tweet-base-italian-uncased") model = BertModel.from_pretrained("osiria/bert-tweet-base-italian-uncased") ``` Here you can find the model already fine-tuned on Sentiment Analysis: https://huggingface.co/osiria/bert-tweet-italian-uncased-sentiment <h3>References</h3> [1] https://arxiv.org/abs/1810.04805 [2] https://arxiv.org/abs/2209.07562 [3] https://arxiv.org/abs/2010.05609 <h3>Limitations</h3> This model was trained on tweets, so it's mainly suitable for general-purpose social media text processing, involving short texts written in a social network style. It might show limitations when it comes to longer and more structured text, or domain-specific text. <h3>License</h3> The model is released under the <b>Apache-2.0</b> license.
hopkins/eng-kor-simcse.dev2.44k
hopkins
2023-07-03T17:51:10Z
92
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T17:38:07Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-kor-simcse.dev2.44k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-kor-simcse.dev2.44k This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9818 - Bleu: 7.4953 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
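The card reports BLEU but no usage; a minimal translation sketch, assuming the tokenizer keeps the mBART-50 language codes of its base model (`en_XX`, `ko_KR`; the example sentence is illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "hopkins/eng-kor-simcse.dev2.44k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```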
Officialletai/Taxi-v3
Officialletai
2023-07-03T17:46:05Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T17:43:57Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.69 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gym

# `load_from_hub` is assumed to be the deep-RL-course helper that downloads and unpickles the model.
model = load_from_hub(repo_id="Officialletai/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
hopkins/eng-ind-simcse.dev2.44k
hopkins
2023-07-03T17:37:40Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T17:19:41Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-ind-simcse.dev2.44k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-ind-simcse.dev2.44k This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7531 - Bleu: 22.6202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
cdreetz/codeparrot-ds2
cdreetz
2023-07-03T17:31:45Z
23
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-15T19:08:28Z
--- license: mit tags: - generated_from_trainer model-index: - name: codeparrot-ds2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds2 GPT-2 style trained on a filtered set of The Stack, specific to data science related code. Things like pandas, numpy, matplotlib, etc. - Loss: 1.0584 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.2038 | 0.01 | 500 | 2.1062 | | 2.0551 | 0.02 | 1000 | 2.0109 | | 1.9622 | 0.02 | 1500 | 1.9219 | | 1.9512 | 0.03 | 2000 | 1.8461 | | 1.8817 | 0.04 | 2500 | 1.7903 | | 1.8341 | 0.05 | 3000 | 1.7401 | | 1.7877 | 0.05 | 3500 | 1.7022 | | 1.7586 | 0.06 | 4000 | 1.6694 | | 1.7271 | 0.07 | 4500 | 1.6457 | | 1.7034 | 0.08 | 5000 | 1.6193 | | 1.6756 | 0.08 | 5500 | 1.5978 | | 1.6576 | 0.09 | 6000 | 1.5772 | | 1.6377 | 0.1 | 6500 | 1.5611 | | 1.6211 | 0.11 | 7000 | 1.5453 | | 1.6033 | 0.11 | 7500 | 1.5317 | | 1.591 | 0.12 | 8000 | 1.5193 | | 1.5765 | 0.13 | 8500 | 1.5053 | | 1.5661 | 0.14 | 9000 | 1.4966 | | 1.5548 | 0.15 | 9500 | 1.4846 | | 1.5429 | 0.15 | 10000 | 1.4729 | | 1.5347 | 0.16 | 10500 | 1.4641 | | 1.5215 | 0.17 | 11000 | 1.4557 | | 1.5151 | 0.18 | 11500 | 1.4454 | | 1.5059 | 0.18 | 12000 | 1.4381 | | 1.499 | 0.19 | 12500 | 1.4288 | | 1.4906 | 0.2 | 13000 | 1.4210 | | 1.4849 | 0.21 | 13500 | 1.4143 | | 1.4765 | 0.21 | 14000 | 1.4085 | | 1.4708 | 0.22 | 14500 | 1.4026 | | 1.4602 | 0.23 | 15000 | 1.3936 | | 1.4533 | 0.24 | 15500 | 1.3896 | | 1.4523 | 0.25 | 16000 | 1.3818 | | 1.4415 | 0.25 | 16500 | 1.3748 | | 1.4417 | 0.26 | 17000 | 1.3701 | | 1.4311 | 0.27 | 17500 | 1.3645 | | 1.4282 | 0.28 | 18000 | 1.3585 | | 1.4223 | 0.28 | 18500 | 1.3531 | | 1.4165 | 0.29 | 19000 | 1.3473 | | 1.4105 | 0.3 | 19500 | 1.3419 | | 1.3993 | 0.31 | 20000 | 1.3374 | | 1.4034 | 0.31 | 20500 | 1.3322 | | 1.3982 | 0.32 | 21000 | 1.3278 | | 1.3951 | 0.33 | 21500 | 1.3225 | | 1.3806 | 0.34 | 22000 | 1.3180 | | 1.3781 | 0.34 | 22500 | 1.3121 | | 1.3761 | 0.35 | 23000 | 1.3082 | | 1.3662 | 0.36 | 23500 | 1.3038 | | 1.3631 | 0.37 | 24000 | 1.2995 | | 1.3549 | 0.38 | 24500 | 1.2955 | | 1.3577 | 0.38 | 25000 | 1.2912 | | 1.3498 | 0.39 | 25500 | 1.2851 | | 1.3428 | 0.4 | 26000 | 1.2807 | | 1.342 | 0.41 | 26500 | 1.2768 | | 1.3365 | 0.41 | 27000 | 1.2720 | | 1.3313 | 0.42 | 27500 | 1.2678 | | 1.3309 | 0.43 | 28000 | 1.2629 | | 1.3221 | 0.44 | 28500 | 1.2594 | | 1.3214 | 0.44 | 29000 | 1.2558 | | 1.3099 | 0.45 | 29500 | 1.2510 | | 1.31 | 0.46 | 30000 | 1.2449 | | 1.31 | 0.47 | 30500 | 1.2414 | | 1.305 | 0.48 | 31000 | 1.2390 | | 1.2975 | 0.48 | 31500 | 1.2358 | | 1.2882 | 0.49 | 32000 | 1.2311 | | 1.2831 | 0.5 | 32500 | 1.2251 | | 1.2836 | 0.51 | 33000 | 1.2212 | | 1.2817 | 0.51 | 33500 | 1.2178 | | 1.2772 | 0.52 | 34000 | 1.2130 | | 1.2651 | 0.53 | 34500 | 1.2080 | | 1.2683 | 0.54 | 35000 | 1.2048 | | 1.2581 | 0.54 
| 35500 | 1.1999 | | 1.263 | 0.55 | 36000 | 1.1972 | | 1.255 | 0.56 | 36500 | 1.1924 | | 1.2466 | 0.57 | 37000 | 1.1884 | | 1.2448 | 0.57 | 37500 | 1.1860 | | 1.2413 | 0.58 | 38000 | 1.1804 | | 1.2362 | 0.59 | 38500 | 1.1782 | | 1.2309 | 0.6 | 39000 | 1.1732 | | 1.2289 | 0.61 | 39500 | 1.1687 | | 1.2208 | 0.61 | 40000 | 1.1649 | | 1.2225 | 0.62 | 40500 | 1.1605 | | 1.2178 | 0.63 | 41000 | 1.1555 | | 1.208 | 0.64 | 41500 | 1.1533 | | 1.2069 | 0.64 | 42000 | 1.1490 | | 1.206 | 0.65 | 42500 | 1.1453 | | 1.2013 | 0.66 | 43000 | 1.1414 | | 1.2003 | 0.67 | 43500 | 1.1374 | | 1.1867 | 0.67 | 44000 | 1.1337 | | 1.187 | 0.68 | 44500 | 1.1302 | | 1.188 | 0.69 | 45000 | 1.1270 | | 1.179 | 0.7 | 45500 | 1.1237 | | 1.1866 | 0.71 | 46000 | 1.1204 | | 1.173 | 0.71 | 46500 | 1.1173 | | 1.1706 | 0.72 | 47000 | 1.1134 | | 1.1645 | 0.73 | 47500 | 1.1099 | | 1.1641 | 0.74 | 48000 | 1.1063 | | 1.1623 | 0.74 | 48500 | 1.1032 | | 1.1561 | 0.75 | 49000 | 1.1006 | | 1.1531 | 0.76 | 49500 | 1.0977 | | 1.1569 | 0.77 | 50000 | 1.0950 | | 1.1505 | 0.77 | 50500 | 1.0927 | | 1.1473 | 0.78 | 51000 | 1.0902 | | 1.1428 | 0.79 | 51500 | 1.0870 | | 1.1412 | 0.8 | 52000 | 1.0844 | | 1.1452 | 0.8 | 52500 | 1.0823 | | 1.1391 | 0.81 | 53000 | 1.0805 | | 1.1329 | 0.82 | 53500 | 1.0783 | | 1.1295 | 0.83 | 54000 | 1.0764 | | 1.125 | 0.84 | 54500 | 1.0746 | | 1.1295 | 0.84 | 55000 | 1.0730 | | 1.1247 | 0.85 | 55500 | 1.0711 | | 1.1225 | 0.86 | 56000 | 1.0696 | | 1.1235 | 0.87 | 56500 | 1.0680 | | 1.1192 | 0.87 | 57000 | 1.0670 | | 1.1189 | 0.88 | 57500 | 1.0654 | | 1.1196 | 0.89 | 58000 | 1.0646 | | 1.1152 | 0.9 | 58500 | 1.0635 | | 1.1133 | 0.9 | 59000 | 1.0628 | | 1.1126 | 0.91 | 59500 | 1.0619 | | 1.1142 | 0.92 | 60000 | 1.0610 | | 1.1112 | 0.93 | 60500 | 1.0605 | | 1.1137 | 0.93 | 61000 | 1.0599 | | 1.1127 | 0.94 | 61500 | 1.0595 | | 1.1111 | 0.95 | 62000 | 1.0592 | | 1.1121 | 0.96 | 62500 | 1.0588 | | 1.1114 | 0.97 | 63000 | 1.0587 | | 1.1121 | 0.97 | 63500 | 1.0585 | | 1.1078 | 0.98 | 64000 | 1.0584 | | 1.1104 | 0.99 | 64500 | 1.0584 | | 1.1057 | 1.0 | 65000 | 1.0584 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1 - Datasets 2.13.1 - Tokenizers 0.13.3
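Since the card does not yet show how to run the model, here is a minimal generation sketch; the repository id below is a placeholder, as this card does not state where the checkpoint is hosted:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual location of the codeparrot-ds2 checkpoint.
generator = pipeline("text-generation", model="your-username/codeparrot-ds2")

prompt = "# create a pandas DataFrame and plot a histogram\nimport pandas as pd\n"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```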
Sourabh2/spaceinvandernoframeship-v2
Sourabh2
2023-07-03T17:28:00Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T17:26:59Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 229.50 +/- 112.19 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sourabh2 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sourabh2 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sourabh2 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
DanialAmin/InsuranceLLM
DanialAmin
2023-07-03T17:20:10Z
0
0
null
[ "region:us" ]
null
2023-07-03T17:15:38Z
--- license: tii-falcon-llm ---
hopkins/eng-deu-simcse.dev2.44k
hopkins
2023-07-03T17:19:14Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T17:05:25Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-deu-simcse.dev2.44k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-deu-simcse.dev2.44k This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6420 - Bleu: 21.3894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
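The card does not include an inference snippet; here is a minimal sketch, assuming the checkpoint keeps the standard mBART-50 tokenizer and language codes:

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "hopkins/eng-deu-simcse.dev2.44k"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Force German as the target language via its mBART-50 language code
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```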
hopkins/eng-deu-delfy
hopkins
2023-07-03T16:49:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T16:03:32Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-deu-delfy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-deu-delfy This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6917 - Bleu: 19.9632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
felipec23/open-llama-3b
felipec23
2023-07-03T16:45:32Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-03T16:45:30Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
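A loading sketch for this adapter (not part of the original card); the base model id below is an assumption inferred from the repository name:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_3b"  # assumed base model, inferred from the repo name
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "felipec23/open-llama-3b")  # attach the adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
```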
Wongstein/vide-noir
Wongstein
2023-07-03T16:39:18Z
175
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "text-generation-inference", "en", "dataset:amazon_us_reviews", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-03T16:13:16Z
--- license: creativeml-openrail-m datasets: - amazon_us_reviews language: - en tags: - text-generation-inference ---
Sandrro/text_to_subfunction_v2
Sandrro
2023-07-03T16:10:33Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-03T13:59:39Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: text_to_subfunction_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_to_subfunction_v2 This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9853 - F1: 0.1413 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.1842 | 1.0 | 2525 | 5.0971 | 0.0271 | | 4.5066 | 2.0 | 5050 | 4.5376 | 0.0676 | | 4.0497 | 3.0 | 7575 | 4.2116 | 0.1053 | | 3.7356 | 4.0 | 10100 | 4.0381 | 0.1318 | | 3.6168 | 5.0 | 12625 | 3.9853 | 0.1413 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.1.0.dev20230414+cu117 - Datasets 2.9.0 - Tokenizers 0.13.3
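A minimal inference sketch (the example input is invented; the label set is specific to this fine-tune):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Sandrro/text_to_subfunction_v2")
# Example input is invented; the base model (rubert-tiny2) targets Russian text.
print(classifier("Как оформить субсидию на оплату коммунальных услуг?"))
```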
khalidbutt/k
khalidbutt
2023-07-03T16:09:24Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-07-03T16:09:24Z
--- license: bigscience-bloom-rail-1.0 ---
FabriLluvia/BOT
FabriLluvia
2023-07-03T16:03:08Z
0
0
adapter-transformers
[ "adapter-transformers", "code", "fill-mask", "es", "en", "dataset:OpenAssistant/oasst1", "dataset:fka/awesome-chatgpt-prompts", "license:apache-2.0", "region:us" ]
fill-mask
2023-07-03T16:01:17Z
--- license: apache-2.0 datasets: - OpenAssistant/oasst1 - fka/awesome-chatgpt-prompts language: - es - en metrics: - accuracy library_name: adapter-transformers pipeline_tag: fill-mask tags: - code ---
dicta-il/BEREL_2.0
dicta-il
2023-07-03T15:36:58Z
425
2
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "he", "arxiv:2208.01875", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-26T22:40:47Z
---
license: apache-2.0
language:
- he
library_name: transformers
tags:
- bert
---

# Introducing BEREL 2.0 - New and Improved BEREL: BERT Embeddings for Rabbinic-Encoded Language

When using BEREL 2.0, please reference:
Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Cheyn Shmuel Shmidman, Eli Handel, Moshe Koppel, "Introducing BEREL: BERT Embeddings for Rabbinic-Encoded Language", Aug 2022 [arXiv:2208.01875]

1. Usage:

```python
from transformers import AutoTokenizer, BertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('dicta-il/BEREL_2.0')
model = BertForMaskedLM.from_pretrained('dicta-il/BEREL_2.0')

# for evaluation, disable dropout
model.eval()
```

> NOTE: This code will **not** work correctly and will produce poor results if you use `BertTokenizer`. Please use `AutoTokenizer` or `BertTokenizerFast`.

2. Demo site:

You can experiment with the model in a GUI interface here: https://dicta-bert-demo.netlify.app/?genre=rabbinic

- The main part of the GUI consists of word buttons visualizing the tokenization of the sentences. Clicking on a button masks it, and then three BEREL word predictions are shown. Clicking on that bubble expands it to 10 predictions; alternatively, ctrl-clicking on that initial bubble expands to 30 predictions.
- Ctrl-clicking adjacent word buttons combines them into a single token for the mask.
- The edit box on top contains the input sentence; this can be modified at will, and the word-buttons will adjust as relevant.
arver/code_gen
arver
2023-07-03T15:21:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-03T15:18:42Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
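For reference, a `BitsAndBytesConfig` reproducing the listed values might look like this (a sketch, not taken from the actual training script):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```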
dcarpintero/q-Taxi-v3
dcarpintero
2023-07-03T15:02:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T15:01:48Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="dcarpintero/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
hopkins/eng-fra-wsample.49
hopkins
2023-07-03T14:56:11Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T14:37:16Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-wsample.49 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-wsample.49 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1184 - Bleu: 33.0590 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
kresnik/wav2vec2-large-xlsr-korean
kresnik
2023-07-03T14:55:40Z
1,123,517
38
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "ko", "dataset:kresnik/zeroth_korean", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ko datasets: - kresnik/zeroth_korean tags: - speech - audio - automatic-speech-recognition license: apache-2.0 model-index: - name: 'Wav2Vec2 XLSR Korean' results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Zeroth Korean type: kresnik/zeroth_korean args: clean metrics: - name: Test WER type: wer value: 4.74 - name: Test CER type: cer value: 1.78 --- ## Evaluation on Zeroth-Korean ASR corpus [Google colab notebook(Korean)](https://colab.research.google.com/github/indra622/tutorials/blob/master/wav2vec2_korean_tutorial.ipynb) ``` from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import soundfile as sf import torch from jiwer import wer processor = Wav2Vec2Processor.from_pretrained("kresnik/wav2vec2-large-xlsr-korean") model = Wav2Vec2ForCTC.from_pretrained("kresnik/wav2vec2-large-xlsr-korean").to('cuda') ds = load_dataset("kresnik/zeroth_korean", "clean") test_ds = ds['test'] def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch test_ds = test_ds.map(map_to_array) def map_to_pred(batch): inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding="longest") input_values = inputs.input_values.to("cuda") with torch.no_grad(): logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = test_ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=["speech"]) print("WER:", wer(result["text"], result["transcription"])) ``` ### Expected WER: 4.74% ### Expected CER: 1.78%
LukeMoore11/Big-Benjamin
LukeMoore11
2023-07-03T14:44:11Z
112
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "summarization", "en", "dataset:LukeMoore11/autotrain-data-second-attempt", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2023-06-21T22:08:19Z
--- tags: - summarization language: - en widget: - text: "Enter legal document..." datasets: - LukeMoore11/autotrain-data-second-attempt co2_eq_emissions: emissions: 67.54051067286701 --- ## Validation Metrics - Loss: 1.379 - Rouge1: 24.817 - Rouge2: 20.238 - RougeL: 24.044 - RougeLsum: 24.222
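A minimal usage sketch along the lines of the widget above:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="LukeMoore11/Big-Benjamin")
document = "Enter legal document..."  # replace with the full text of a legal document
print(summarizer(document, max_length=150, min_length=40)[0]["summary_text"])
```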
Phips/q-FrozenLake-v1-4x4-noSlippery
Phips
2023-07-03T14:42:44Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T14:42:40Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="Phips/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
TootToot/ppo-LunarLander-v2-v2
TootToot
2023-07-03T14:19:39Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T14:19:21Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 260.50 +/- 24.77
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename inside the repo is an assumption):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the SB3 convention
checkpoint = load_from_hub(repo_id="TootToot/ppo-LunarLander-v2-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
toloka/gpt2-large-supervised-prompt-writing
toloka
2023-07-03T14:05:32Z
28
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-13T13:30:02Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: gpt2-sweep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-sweep This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0808 - Accuracy: 0.8556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.294477077303931e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.4827 | 0.19 | 1000 | 2.4565 | 0.8520 | | 2.6468 | 0.37 | 2000 | 2.3303 | 0.8530 | | 2.5106 | 0.56 | 3000 | 2.2487 | 0.8537 | | 2.0732 | 0.74 | 4000 | 2.2020 | 0.8541 | | 2.159 | 0.93 | 5000 | 2.1594 | 0.8545 | | 1.856 | 1.12 | 6000 | 2.1518 | 0.8548 | | 1.9138 | 1.3 | 7000 | 2.1261 | 0.8551 | | 1.8055 | 1.49 | 8000 | 2.1126 | 0.8552 | | 2.0385 | 1.67 | 9000 | 2.1008 | 0.8554 | | 1.9648 | 1.86 | 10000 | 2.0858 | 0.8555 | ### Framework versions - Transformers 4.26.0 - Pytorch 2.0.0+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
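A minimal generation sketch (the expected input format is not documented in the card, so the plain seed string is an assumption):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="toloka/gpt2-large-supervised-prompt-writing")
print(generator("Write a prompt about", max_new_tokens=40)[0]["generated_text"])
```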
iammartian0/distilhubert-finetuned-gtzan
iammartian0
2023-07-03T13:52:49Z
159
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-03T10:17:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5528 - Accuracy: 0.84 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1578 | 0.99 | 56 | 2.1203 | 0.55 | | 1.6815 | 2.0 | 113 | 1.6607 | 0.57 | | 1.2921 | 2.99 | 169 | 1.2421 | 0.64 | | 1.0324 | 4.0 | 226 | 1.0260 | 0.7 | | 0.8661 | 4.99 | 282 | 0.8973 | 0.7 | | 0.6192 | 6.0 | 339 | 0.7420 | 0.79 | | 0.5437 | 6.99 | 395 | 0.6951 | 0.8 | | 0.4917 | 8.0 | 452 | 0.6996 | 0.78 | | 0.3868 | 8.99 | 508 | 0.6648 | 0.81 | | 0.3816 | 10.0 | 565 | 0.6584 | 0.79 | | 0.1935 | 10.99 | 621 | 0.6101 | 0.84 | | 0.128 | 12.0 | 678 | 0.5445 | 0.85 | | 0.1144 | 12.99 | 734 | 0.5703 | 0.84 | | 0.0828 | 14.0 | 791 | 0.5632 | 0.83 | | 0.0928 | 14.87 | 840 | 0.5528 | 0.84 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
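A minimal inference sketch (the audio path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="iammartian0/distilhubert-finetuned-gtzan")
# "song.wav" is a placeholder path to any local audio file
print(classifier("song.wav"))
```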
renyulin/gpt-neo-1.3b-es-rlhf-step2500-peft
renyulin
2023-07-03T13:45:51Z
0
0
transformers
[ "transformers", "pytorch", "trl", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-07-03T13:45:49Z
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="renyulin/gpt-neo-1.3b-es-rlhf-step2500-peft")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("renyulin/gpt-neo-1.3b-es-rlhf-step2500-peft")
model = AutoModelForCausalLMWithValueHead.from_pretrained("renyulin/gpt-neo-1.3b-es-rlhf-step2500-peft")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
juliensimon/autotrain-food101-1471154053
juliensimon
2023-07-03T13:43:26Z
210
0
transformers
[ "transformers", "pytorch", "safetensors", "autotrain", "vision", "image-classification", "dataset:juliensimon/autotrain-data-food101", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
image-classification
2022-09-15T12:42:49Z
--- tags: - autotrain - vision - image-classification datasets: - juliensimon/autotrain-data-food101 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 179.11544810549532 --- # Usage ``` from transformers import pipeline p = pipeline("image-classification", model="juliensimon/autotrain-food101-1471154053") result = p("my_image.jpg") ``` # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1471154053 - CO2 Emissions (in grams): 179.1154 ## Validation Metrics - Loss: 0.301 - Accuracy: 0.915 - Macro F1: 0.915 - Micro F1: 0.915 - Weighted F1: 0.915 - Macro Precision: 0.917 - Micro Precision: 0.915 - Weighted Precision: 0.917 - Macro Recall: 0.915 - Micro Recall: 0.915 - Weighted Recall: 0.915
dcarpintero/q-FrozenLake-v1-4x4-noSlippery
dcarpintero
2023-07-03T13:41:08Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T13:41:06Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="dcarpintero/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AndreNasci/ppo-Huggy
AndreNasci
2023-07-03T13:26:19Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-03T13:26:09Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AndreNasci/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
Khushnur/t5-base-end2end-questions-generation_eli_squad
Khushnur
2023-07-03T13:17:24Z
161
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:eli5_cleaned_datav3_60k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-29T18:54:58Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - eli5_cleaned_datav3_60k model-index: - name: t5-base-end2end-questions-generation_eli_squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-end2end-questions-generation_eli_squad This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5_cleaned_datav3_60k dataset. It achieves the following results on the evaluation set: - Loss: 2.3313 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7426 | 0.26 | 100 | 2.4735 | | 2.305 | 0.52 | 200 | 2.4169 | | 2.2034 | 0.78 | 300 | 2.3887 | | 2.1562 | 1.04 | 400 | 2.3710 | | 2.0883 | 1.31 | 500 | 2.3574 | | 2.07 | 1.57 | 600 | 2.3492 | | 2.0595 | 1.83 | 700 | 2.3433 | | 2.0337 | 2.09 | 800 | 2.3384 | | 2.0012 | 2.35 | 900 | 2.3353 | | 2.0175 | 2.61 | 1000 | 2.3320 | | 2.0035 | 2.87 | 1100 | 2.3313 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
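A minimal inference sketch; the `generate questions:` prefix is an assumption based on common end-to-end question-generation fine-tunes, not something stated in this card:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Khushnur/t5-base-end2end-questions-generation_eli_squad")
context = "The Amazon rainforest produces about 20 percent of the world's oxygen."
# The task prefix below is an assumption; adjust if the training format differed.
print(generator("generate questions: " + context, max_new_tokens=64)[0]["generated_text"])
```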
eskalofi/annchristinkh
eskalofi
2023-07-03T13:13:24Z
29
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-03T13:09:57Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### annchristinkh Dreambooth model trained by eskalofi with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
hiepnh/longchat-7b-16k-sharded
hiepnh
2023-07-03T13:10:23Z
7
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-03T10:45:56Z
A sharded version of https://huggingface.co/lmsys/longchat-7b-16k, split into smaller checkpoint files so the model can be loaded on machines with limited RAM.
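A minimal loading sketch; note that the upstream longchat models rely on FastChat's rotary-embedding condensation for the full 16k context, so plain `transformers` loading is shown here only as a starting point:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hiepnh/longchat-7b-16k-sharded"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Sharded checkpoints load shard-by-shard, so peak RAM stays close to one shard's size.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
```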
veluchs/whisper-tiny-us
veluchs
2023-07-03T13:06:17Z
86
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-03T12:43:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-us results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train[450:] args: en-US metrics: - name: Wer type: wer value: 0.33943329397874855 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-us This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6329 - Wer Ortho: 0.3430 - Wer: 0.3394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0009 | 17.86 | 500 | 0.6329 | 0.3430 | 0.3394 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
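A minimal inference sketch (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="veluchs/whisper-tiny-us")
# "sample.wav" is a placeholder path to a local recording
print(asr("sample.wav")["text"])
```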
coreml-community/coreml-aZovyaRPGArtistTools-v3VAE_cn
coreml-community
2023-07-03T13:04:23Z
0
2
null
[ "coreml", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-03T08:30:42Z
--- license: creativeml-openrail-m tags: - coreml - stable-diffusion - text-to-image --- # Core ML Converted Model: - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML). - Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images. - `split_einsum` version is compatible with all compute unit options including Neural Engine. - `original` version is only compatible with `CPU & GPU` option. - Custom resolution versions are tagged accordingly. - The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model. - This model was converted with a `vae-encoder` for use with `image2image`. - This model is `fp16`. - Descriptions are posted as-is from original model source. - Not all features and/or results may be available in `CoreML` format. - This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). - This model does not include a `safety checker` (for NSFW content). - This model can be used with ControlNet. <br> # aZovyaRPGArtistTools_v3VAE_cn: Source(s): [CivitAI](https://civitai.com/models/8124/a-zovya-rpg-artist-tools) ## A-Zovya RPG Artist Tools A model with professional RPG industry artists in mind. Those working in video games, board and tabletop games as well as concept art and book covers should get good use from this model. The new version 3 packs in more training for creatures and rendering styles. [Detailed tutorial on how I get the results in the preview images.](https://civitai.com/models/34192/ultra-sharp-high-contrast-tutorial-vaeandupscaler)\ Check here if you're having trouble getting the same results. Initial generation size and VAE usage is key. You can prompt any style you need with these models, but the default aesthetic is listed for each of the models in this handy list. ## V3+VAE Same as V3 but with the added convenience of having a preset VAE baked in so you don't need to select that each time. Saves on vram usage and possible NaN errors. Speeds up workflow if that's the VAE you're going to use anyway. Compared to earlier versions, V3 has a stronger painterly style. High contrast and sharpness. Even more RPG knowledge. Do you have requests? I've been putting in many more hours lately with this. That's my problem, not yours. But if you'd like to tip me, buy me a beer. Beer encourages me to ignore work and make AI models instead. Tip and make a request. I'll give it a shot if I can. 
[Here at Ko-Fi](https://ko-fi.com/zovya)<br><br> ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b645364e-937d-4e3c-b9b4-eeea02652b87/width=450/preview1.jpeg) ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b34cac02-2821-41e5-8122-8219e0717f2a/width=450/93034-4020721394-zrpgstyle%20tiefling%20human%20devil%20hybrid%20winged%20succubus%20masterpiece-A-Zovya_RPG_Artist_Tools_V3.jpeg) ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c0d8ff8b-e736-49bd-acac-e99eace9dbaa/width=450/93251-430826836-zrpgstyle%20medieval%20portrait%20fantasy%20elderly%20evil%20wizard%20glorious-A-Zovya_RPG_Artist_Tools_V3.jpeg) ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cdd9ca73-9c5d-4c58-be99-2da68a47c722/width=450/92813-978970787-dungeons%20and%20dragons%20epic%20movie%20poster%20barbarian%20woman-A-Zovya_RPG_Artist_Tools_V3.jpeg)
GeniusVoice/bert-base-dutch-cased-finetuned-gem
GeniusVoice
2023-07-03T12:58:44Z
124
1
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "nl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language:
- nl
tags:
- generated_from_trainer
model_index:
- name: bert-base-dutch-cased-finetuned-gem
  results:
  - task:
      name: Masked Language Modeling
      type: fill-mask
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-dutch-cased-finetuned-gem

This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8767

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7518 | 1.0 | 2133 | 1.8428 |
| 1.5679 | 2.0 | 4266 | 1.8729 |
| 1.3332 | 3.0 | 6399 | 1.8767 |

### Framework versions

- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
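A minimal fill-mask sketch (the example sentence is invented):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="GeniusVoice/bert-base-dutch-cased-finetuned-gem")
# Invented Dutch example; [MASK] is BERT's mask token.
for pred in unmasker("Amsterdam is de [MASK] van Nederland."):
    print(pred["token_str"], round(pred["score"], 3))
```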
msladic/Reinforce-Pixelcopter-PLE-v0
msladic
2023-07-03T12:51:10Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T10:03:43Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 21.70 +/- 12.40 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
hopkins/mbart-finetuned-eng-ind-longest
hopkins
2023-07-03T12:45:11Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T12:26:25Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-ind-longest results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-longest This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7474 - Bleu: 21.9863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
hopkins/mbart-finetuned-eng-ind-random
hopkins
2023-07-03T12:44:05Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T12:26:05Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-ind-random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-random This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7847 - Bleu: 21.5508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
Tadorne/Political_Parrots
Tadorne
2023-07-03T12:37:20Z
0
0
null
[ "political", "gpt", "eu", "🇪🇺", "legal", "text-generation", "region:us" ]
text-generation
2023-07-02T13:30:44Z
---
tags:
- political
- gpt
- eu
- 🇪🇺
- legal
pipeline_tag: text-generation
---

# 🗳️ Political Parrots 🦜

This repository contains fine-tuned models based on LLaMA 7B. These models were trained on the amendments made by the European Parliament's political groups during the 7th, 8th, and current legislatures.

| Political Group | Number of amendments |
|-----------------|----------------------|
| SD              | 55 149               |
| EPP             | 68 780               |
| Renew           | 17 379               |
| ECR             | 22 479               |

## An example of the results:

Here is the prompt:

> Amend the following text.
> paternity leave means leave from work for fathers to be taken on the occasion of the birth of a child

__The output of the SD model (Socialists & Democrats):__

> paternity leave means leave from work for fathers **, mothers and adoptive parents** to be taken on the occasion of the birth **or adoption** of a child

__The output of the EPP model (European People's Party Group):__

> paternity leave means leave from work **of at least ten working days** for fathers to be taken on the occasion of the birth of a child, **in accordance with national law , in order to enable the father to take care of the child and the mother .**
renatoneto14/HuggyTraining
renatoneto14
2023-07-03T12:29:30Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-03T12:28:24Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: renatoneto14/HuggyTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
hopkins/mbart-finetuned-eng-deu-longest
hopkins
2023-07-03T12:25:56Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-03T12:06:22Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-deu-longest results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-deu-longest This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6322 - Bleu: 20.9315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
napoleonthethird/wav2vec2-base-libris-prime
napoleonthethird
2023-07-03T12:21:45Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-01T13:52:21Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-libris-prime results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-libris-prime This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Heitechsoft/FalconAlpaca-7B
Heitechsoft
2023-07-03T12:17:56Z
14
1
transformers
[ "transformers", "pytorch", "RefinedWebModel", "text-generation", "custom_code", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-30T11:34:06Z
---
license: apache-2.0
---

# Model Card for FalconAlpaca

<!-- Provide a quick summary of what the model is/does. -->

FalconAlpaca is Falcon-7B trained on the [Stanford Alpaca Dataset](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json)

## Model Details

This model is an attempt to steer the outputs of Falcon-7B toward more information-rich and focused responses. Trained using [Lit GPT](https://github.com/Lightning-AI/lit-gpt), the model took 2 hours to train on a single 4xA6000 node.

### Model Description

- **License:** [Apache 2.0]
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b)

### Model Sources

[Stanford Alpaca Dataset](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json)

### Out-of-Scope Use

This model is not intended for anything but testing purposes. There have been no attempts to control or remove bias, toxicity, or any other form of potentially dangerous or harmful content.

## Bias, Risks, and Limitations

No effort was made to remove wrong or harmful information from Falcon-7B or the Alpaca dataset. Any risks and limitations of either of those models/datasets carry over to this project as well.

## How to Get Started with the Model

Download and install the libraries for [Lit GPT](https://github.com/Lightning-AI/lit-gpt), then run:

```sh
python generate/adapter_v2.py \
    --adapter_path path/to/model/lit_model_adapter_finetuned.pth \
    --checkpoint_dir path/to/model \
    --prompt "What temperature should I cook pork at to ensure it is safe?"
```

This uses around 14GB of VRAM. If you need to use less VRAM, you can add the parameter `--quantize llm.int8` or `--quantize gptq.int4`.

### Training Data

[Stanford Alpaca Dataset](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json)

#### Training Hyperparameters

The defaults were as follows:

```
learning_rate = 9e-3
batch_size = 32
micro_batch_size = 2
gradient_accumulation_iters = 16
epoch_size = 50000
num_epochs = 5
max_iters = 125000
weight_decay = 0.02
warmup_iters = 50000
```

## More Information

[HeitechSoft](https://heitechsoft.com/blog/heitechsoft-s-falcon-7b-fine-tuned-model-paves-the-way-for-advanced-ai-chatbots)
Tverous/gpt-j-claim-generator
Tverous
2023-07-03T12:15:25Z
4
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "generated_from_trainer", "dataset:anli", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-29T01:03:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - anli metrics: - rouge model-index: - name: gpt-j-claim-generator results: - task: name: Causal Language Modeling type: text-generation dataset: name: anli type: anli config: plain_text split: dev_r3 args: plain_text metrics: - name: Rouge1 type: rouge value: 0.8913860940628431 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-j-claim-generator This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on the anli dataset. It achieves the following results on the evaluation set: - Loss: 0.0232 - Rouge1: 0.8914 - Rouge2: 0.8240 - Rougel: 0.8863 - Rougelsum: 0.8864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - total_train_batch_size: 36 - total_eval_batch_size: 3 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:| | 0.013 | 1.79 | 5000 | 0.0200 | 0.8921 | 0.8194 | 0.8859 | 0.8860 | | 0.0085 | 3.58 | 10000 | 0.0232 | 0.8914 | 0.8240 | 0.8863 | 0.8864 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
DEplain/trimmed_mbart_sents_apa_web
DEplain
2023-07-03T12:09:30Z
9
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "text simplification", "plain language", "easy-to-read language", "sentence simplification", "de", "dataset:DEplain/DEplain-APA-sent", "dataset:DEplain/DEplain-web-sent", "arxiv:2305.18939", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-01T14:45:33Z
---
datasets:
- DEplain/DEplain-APA-sent
- DEplain/DEplain-web-sent
language:
- de
metrics:
- sari
- bleu
- bertscore
library_name: transformers
pipeline_tag: text2text-generation
tags:
- text simplification
- plain language
- easy-to-read language
- sentence simplification
---

# DEplain German Text Simplification

This model belongs to the experiments in Stodden, Momen, and Kallmeyer (2023), ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939) In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.

Detailed documentation can be found in the GitHub repository [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain).

### Model Description

The model is a finetuned checkpoint of the pre-trained mBART model `mbart-large-cc25`, with its vocabulary trimmed to the 30k most frequent words in the German language. The model was finetuned on the task of German sentence simplification. The finetuning data included manually aligned sentences from the datasets `DEplain-APA-sent` and `DEplain-web-sent-manual-open`.
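A minimal usage sketch; whether the checkpoint's tokenizer has the mBART language handling preconfigured is an assumption, and the example sentence is invented:

```python
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="DEplain/trimmed_mbart_sents_apa_web")
sentence = "Die Novellierung des Gesetzes sieht umfangreiche Änderungen vor."  # invented example
print(simplifier(sentence, max_new_tokens=64)[0]["generated_text"])
```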
teoha/openai-whisper-medium-LORA-ja
teoha
2023-07-03T12:08:06Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-02T20:16:19Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
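## Usage (sketch)

The card documents only the training-time quantization config. Below is a hypothetical loading sketch: it assumes the adapters were trained on top of `openai/whisper-medium` (consistent with the repo name) in 8-bit, matching the config above; verify against the actual training script before use.

```python
# Hypothetical sketch: load the base Whisper model in 8-bit and attach the LoRA adapters.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "teoha/openai-whisper-medium-LORA-ja")
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-medium", language="Japanese", task="transcribe"
)
```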
WinterDP/ppo-Huggy
WinterDP
2023-07-03T12:05:54Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-03T12:05:51Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: WinterDP/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
velascoluis/falcon7b-instruct-database-ft
velascoluis
2023-07-03T11:50:55Z
0
0
null
[ "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-02T19:45:27Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: falcon7b-instruct-database-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon7b-instruct-database-ft This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4994 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
searde/model-financial-documents-3
searde
2023-07-03T11:46:05Z
109
1
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:searde/dataset-financial-documents-3", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-29T08:20:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - searde/dataset-financial-documents-3 metrics: - rouge model-index: - name: tst-summarization results: - task: name: Summarization type: summarization dataset: name: searde/dataset-financial-documents-3 3.0.0 type: searde/dataset-financial-documents-3 config: 3.0.0 split: validation args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 14.9574 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tst-summarization This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the searde/dataset-financial-documents-3 3.0.0 dataset. It achieves the following results on the evaluation set: - Loss: 3.0505 - Rouge1: 14.9574 - Rouge2: 0.0 - Rougel: 8.4517 - Rougelsum: 12.4858 - Gen Len: 63.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
ayushutkarsh/t3
ayushutkarsh
2023-07-03T11:35:55Z
51
6
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "conversational", "en", "dataset:McGill-NLP/FaithDial", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-02T06:07:50Z
--- license: apache-2.0 datasets: - McGill-NLP/FaithDial language: - en metrics: - bleu - bertscore - accuracy pipeline_tag: conversational --- T3 stands for Terribly Tiny Transformers: an efficient way of creating tiny distilled (student) models for hallucination-free LLMs in parameter-constrained environments (edge devices). The base model is a T3 adaptation of the T5 model. The T3 paradigm can be extended to all model types (encoder-only, decoder-only, and seq2seq).
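No usage snippet is provided above; the following is a minimal sketch assuming the checkpoint loads as a standard T5 seq2seq model (the tags list `t5`/`text2text-generation`). The input format for knowledge-grounded dialogue is an assumption and may not match the training setup.

```python
# Minimal sketch (assumptions: standard T5 seq2seq loading; dialogue formatting invented).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ayushutkarsh/t3")
model = AutoModelForSeq2SeqLM.from_pretrained("ayushutkarsh/t3")

# FaithDial-style knowledge-grounded input; the exact formatting is assumed.
history = "knowledge: Mount Everest is Earth's highest mountain. user: What is the tallest mountain?"
inputs = tokenizer(history, return_tensors="pt")
reply = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(reply[0], skip_special_tokens=True))
```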
joserodr68/dqn-SpaceInvadersNoFrameskip-v4
joserodr68
2023-07-03T11:34:25Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T11:33:53Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 380.50 +/- 203.24 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga joserodr68 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga joserodr68 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga joserodr68 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
AMUseBot/roberta-base-cookdial-v1_1
AMUseBot
2023-07-03T11:31:15Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "endpoints_compatible", "region:us" ]
text-classification
2023-06-17T09:35:53Z
--- language: - en library_name: transformers tags: - text-classification widget: - text: "What ingredients do I need?" --- - Baseline NLU model for the "AMUseBot" cooking taskbot prototype. Updated version with more robust req_ingredient intent recognition, thanks to fine-tuning with extra synthetic data. - A ``roberta-base`` model fine-tuned with default hyperparameters for 7 epochs on intents from the CookDial dataset (https://github.com/YiweiJiang2015/CookDial), with an extra choose_recipe intent added. The ``simpletransformers`` library was used for fine-tuning. - Intent mapping: {"0": "affirm", "1": "choose_recipe", "2": "confirm", "3": "goodbye", "4": "greeting", "5": "negate", "6": "other", "7": "req_amount", "8": "req_duration", "9": "req_ingredient", "10": "req_ingredient_list", "11": "req_ingredient_list_ends", "12": "req_ingredient_list_length", "13": "req_instruction", "14": "req_is_recipe_finished", "15": "req_is_recipe_ongoing", "16": "req_parallel_action", "17": "req_repeat", "18": "req_start", "19": "req_substitute", "20": "req_temperature", "21": "req_title", "22": "req_tool", "23": "req_use_all", "24": "thank"}.
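A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` classification classes; the predicted class id is resolved through the intent mapping listed above.

```python
# Illustrative sketch: predict an intent id and look it up in the mapping above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AMUseBot/roberta-base-cookdial-v1_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("What ingredients do I need?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
intent_id = logits.argmax(dim=-1).item()
print(intent_id)  # e.g. 9 -> "req_ingredient" per the intent mapping above
```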
Shularp/TestHelsinkimulEnJpTh
Shularp
2023-07-03T11:25:45Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-03T10:03:55Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: TestHelsinkimulEnJpTh results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TestHelsinkimulEnJpTh This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6825 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7413 | 1.0 | 4846 | 1.0482 | | 0.8773 | 2.0 | 9692 | 0.8856 | | 0.8138 | 3.0 | 14538 | 0.7699 | | 0.531 | 4.0 | 19384 | 0.7021 | | 0.2739 | 5.0 | 24230 | 0.6825 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
zijun/autotrain-input_list-71788138727
zijun
2023-07-03T11:19:37Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "autotrain", "unk", "dataset:zijun/autotrain-data-input_list", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-03T11:19:08Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain" datasets: - zijun/autotrain-data-input_list co2_eq_emissions: emissions: 0.20160817247860105 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 71788138727 - CO2 Emissions (in grams): 0.2016 ## Validation Metrics - Loss: 0.261 - Accuracy: 0.882 - Precision: 0.926 - Recall: 0.926 - AUC: 0.931 - F1: 0.926 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zijun/autotrain-input_list-71788138727 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("zijun/autotrain-input_list-71788138727", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("zijun/autotrain-input_list-71788138727", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
GretaClementi96/blip2-opt-2.7b-inbreast-flip-captions-adapters
GretaClementi96
2023-07-03T11:13:25Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-02T14:09:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
tomar840/falcon-7b-math-reasoning
tomar840
2023-07-03T11:09:24Z
0
0
null
[ "arxiv:2106.09685", "arxiv:2305.14314", "region:us" ]
null
2023-07-03T11:04:01Z
--- pipeline_tag: text-generation license: apache-2.0 --- # 🚀 Falcon-7b-math-reasoning Falcon-7b-math-reasoning is a chatbot-like model for dialogue generation. It was built by fine-tuning [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) on the [openai/prm800k](https://github.com/openai/prm800k) dataset. This repo only includes the LoRA adapters from fine-tuning with 🤗's [peft](https://github.com/huggingface/peft) package. ## Model Summary - **Model Type:** Causal decoder-only - **Language(s):** English - **Base Model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) (License: [Apache 2.0](https://huggingface.co/tiiuae/falcon-7b#license)) - **Dataset:** [openai/prm800k](https://github.com/openai/prm800k) - **License(s):** Apache 2.0 inherited from "Base Model" and "Dataset" ## Model Details The model was fine-tuned in 8-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. ### Model Date July 3, 2023 ## Quick Start To prompt the chat model, use the following format: ``` <human>: [Instruction] <bot>: ``` ### Example Dialogue 1 **Prompter**: ``` """<human>: A positive integer $X$ is 2 more than a multiple of 3. Its units digit is the same as the units digit of a number that is 4 more than a multiple of 5. What is the smallest possible value of $X$? <bot>:""" ``` **Falcon-7b-math-reasoning**: ``` I need to find a positive integer $X$ that satisfies two conditions: $X$ is 2 more than a multiple of 3, and its units digit is the same as the units digit of a number that is 4 more than a multiple of 5. I can start by listing some multiples of 3 and their units digits: 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51, 54, 57, 60, 63, 66, 69, 72, 75, 78, 81, 84, 87, 90, 93, 96, 99. I can also list some multiples of 5 and their units digits: 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100. I notice that the units digit of a multiple of 3 is always 3, and the units digit of a multiple of 5 is always 5. This means that the units digit of $X$ must be 3, and the units digit of $X$ must be 5. The only multiple of 3 that is 2 more than a multiple of 3 is 6, and the only multiple of 5 that is 4 more than a multiple of 5 is 19. Therefore, the smallest possible value of $X$ is 6. # Answer 6 ```
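### Loading sketch

The Quick Start above shows only the prompt format. The following is a hypothetical loading sketch that attaches the LoRA adapters in this repo to the base model with `peft`, mirroring the 8-bit training setup described above; generation settings are assumptions.

```python
# Hypothetical sketch: base Falcon-7B in 8-bit plus the LoRA adapters from this repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", trust_remote_code=True, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "tomar840/falcon-7b-math-reasoning")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

prompt = "<human>: What is the smallest positive multiple of 6 greater than 20?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```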
mcamara/taxi-v3
mcamara
2023-07-03T11:04:56Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T10:47:28Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="mcamara/taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
lx865712528/master-base-pretrained-msmarco
lx865712528
2023-07-03T11:04:17Z
107
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "feature-extraction", "en", "dataset:ms_marco", "arxiv:2212.07841", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2023-07-03T10:19:24Z
--- license: mit datasets: - ms_marco language: - en pipeline_tag: feature-extraction --- # MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers Paper: [https://arxiv.org/abs/2212.07841](https://arxiv.org/abs/2212.07841). Code: [https://github.com/microsoft/SimXNS/tree/main/MASTER](https://github.com/microsoft/SimXNS/tree/main/MASTER). ## Overview This is the checkpoint after pre-training on the MS-MARCO corpus. **You may use this checkpoint as the initialization for fine-tuning.** ## Usage To load this checkpoint for initialization, you may follow: ```python from transformers import AutoModel model = AutoModel.from_pretrained('lx865712528/master-base-pretrained-msmarco') ```
AhmedTaha012/gptneo-TxtToJson-v0.1.18
AhmedTaha012
2023-07-03T10:59:40Z
116
1
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-03T09:38:10Z
--- license: mit tags: - generated_from_trainer model-index: - name: gptneo-TxtToJson-v0.1.18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gptneo-TxtToJson-v0.1.18 This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0001 | 1.0 | 219 | 0.9411 | | 0.7298 | 2.0 | 438 | 0.7063 | | 0.4617 | 3.0 | 657 | 0.4989 | | 0.3366 | 4.0 | 876 | 0.3333 | | 0.2295 | 5.0 | 1095 | 0.2569 | | 0.1593 | 6.0 | 1314 | 0.2146 | | 0.1611 | 7.0 | 1533 | 0.1986 | | 0.0983 | 8.0 | 1752 | 0.1866 | | 0.0933 | 9.0 | 1971 | 0.1776 | | 0.0794 | 10.0 | 2190 | 0.1742 | | 0.0743 | 11.0 | 2409 | 0.1734 | | 0.0459 | 12.0 | 2628 | 0.1690 | | 0.0386 | 13.0 | 2847 | 0.1681 | | 0.0261 | 14.0 | 3066 | 0.1673 | | 0.0209 | 15.0 | 3285 | 0.1669 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
boleklolek/olka
boleklolek
2023-07-03T10:42:40Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-03T10:37:51Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### olka Dreambooth model trained by boleklolek with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
fidhal-ad/liar-deberta-v3-base_first_attempt
fidhal-ad
2023-07-03T10:37:21Z
104
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-15T15:43:51Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: liar-deberta-v3-base_first_attempt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # liar-deberta-v3-base_first_attempt This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6722 - Accuracy: 0.2705 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7307 | 1.0 | 642 | 1.6722 | 0.2705 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.0 - Tokenizers 0.13.3
MOHIT8082/my-pet-dog-sbs
MOHIT8082
2023-07-03T10:12:22Z
10
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-03T10:07:31Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-SBS Dreambooth model trained by MOHIT8082 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: CVRGU236 Sample pictures of this concept: ![0](https://huggingface.co/MOHIT8082/my-pet-dog-sbs/resolve/main/sample_images/sbs_(4).jpg)
ZidanSink/Kayessss
ZidanSink
2023-07-03T10:11:35Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-03T10:09:49Z
--- license: creativeml-openrail-m ---
pavanpankaj/incre-train-addlayers
pavanpankaj
2023-07-03T10:11:17Z
0
0
peft
[ "peft", "pytorch", "RefinedWebModel", "custom_code", "region:us" ]
null
2023-07-03T09:54:35Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
ecwk/distilbert-git-commits-bugfix-classification
ecwk
2023-07-03T10:09:49Z
103
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-03T10:08:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: distilbert-git-commits-bugfix-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-git-commits-bugfix-classification This model is a fine-tuned version of [neuralsentry/distilbert-git-commits-mlm](https://huggingface.co/neuralsentry/distilbert-git-commits-mlm) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5037 - Accuracy: 0.9231 - Precision: 0.85 - Recall: 1.0 - F1: 0.9189 - Roc Auc: 0.9318 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 420 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.6837 | 1.0 | 22 | 0.6040 | 0.5897 | 0.5161 | 0.9412 | 0.6667 | 0.6297 | | 0.3852 | 2.0 | 44 | 0.2881 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 | | 0.2148 | 3.0 | 66 | 0.3807 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 | | 0.0701 | 4.0 | 88 | 0.4934 | 0.8718 | 0.7727 | 1.0 | 0.8718 | 0.8864 | | 0.0164 | 5.0 | 110 | 0.4892 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 | | 0.0039 | 6.0 | 132 | 0.4929 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 | | 0.0012 | 7.0 | 154 | 0.4065 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 | | 0.0008 | 8.0 | 176 | 0.4837 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 | | 0.0007 | 9.0 | 198 | 0.5000 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 | | 0.0006 | 10.0 | 220 | 0.5037 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
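## Usage (sketch)

A minimal usage sketch, assuming the standard `transformers` pipeline API; the label names returned depend on the model's config (e.g. LABEL_0/LABEL_1) and are not documented above.

```python
# Minimal sketch: classify a git commit message as bugfix / non-bugfix.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="ecwk/distilbert-git-commits-bugfix-classification",
)
print(clf("fix: prevent null pointer dereference when parsing empty headers"))
```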
sarthak101/my-pet-dog
sarthak101
2023-07-03T10:03:13Z
0
0
null
[ "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-03T09:56:02Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by sarthak101 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: CVRGU313 Sample pictures of this concept: ![0](https://huggingface.co/sarthak101/my-pet-dog/resolve/main/sample_images/xzg_(2).jpeg) ![1](https://huggingface.co/sarthak101/my-pet-dog/resolve/main/sample_images/xzg_(3).jpeg) ![2](https://huggingface.co/sarthak101/my-pet-dog/resolve/main/sample_images/xzg_(4).jpeg) ![3](https://huggingface.co/sarthak101/my-pet-dog/resolve/main/sample_images/xzg_(1).jpeg)
Abhishek1609/my-pet-dog-sbs
Abhishek1609
2023-07-03T09:57:16Z
6
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-03T09:52:44Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-SBS Dreambooth model trained by Abhishek1609 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: CVRGU243 Sample pictures of this concept: ![0](https://huggingface.co/Abhishek1609/my-pet-dog-sbs/resolve/main/sample_images/sbs_(1).jpg)
KJan05/KJan-Taxi-v3
KJan05
2023-07-03T09:55:36Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T09:55:33Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: KJan-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="KJan05/KJan-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```