Dataset columns (dtype and observed min/max values):

| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-04 06:26:56 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (538 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-04 06:26:41 |
| card | string (length) | 11 | 1.01M |
aronmal/Reinforce-CartpoleMLP
aronmal
2023-07-06T07:53:32Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:53:23Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-MLP results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 464.00 +/- 91.98 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
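The card above gives no usage snippet, so here is a minimal evaluation sketch of the kind of REINFORCE setup it describes. The policy architecture, hidden size, and the classic `gym` (<0.26) reset/step API are assumptions for illustration, not details taken from this repository.

```python
import gym
import torch
import torch.nn as nn

# Hypothetical two-layer MLP policy in the style of the course exercises;
# the hidden size and file layout are assumptions, not repo details.
class Policy(nn.Module):
    def __init__(self, state_dim=4, hidden=128, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions), nn.Softmax(dim=-1),
        )

    def act(self, state):
        probs = self.net(torch.as_tensor(state, dtype=torch.float32))
        return torch.distributions.Categorical(probs).sample().item()

env = gym.make("CartPole-v1")
policy = Policy()  # load the trained weights from the repo here if provided
state, done, episode_return = env.reset(), False, 0.0
while not done:
    state, reward, done, _ = env.step(policy.act(state))
    episode_return += reward
print(f"episode return: {episode_return}")
```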
xian79/Reinforce-CartPole-v1
xian79
2023-07-06T07:51:38Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:51:27Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Technotech/RedPajama-Base-3B-4bit-128g
Technotech
2023-07-06T07:49:49Z
5
0
transformers
[ "transformers", "gpt_neox", "text-generation", "gptq", "en", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-12T09:18:42Z
--- license: apache-2.0 language: - en datasets: - togethercomputer/RedPajama-Data-1T tags: - gptq --- ## RedPajama-Base-3B-4bit-128g RedPajama 3B, quantised to 4bit with groupsize of 128, no act order. # Original Model Card # RedPajama-INCITE-Base-3B-v1 RedPajama-INCITE-Base-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION. The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program. - Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) - Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1) - Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) ## Model Details - **Developed by**: Together Computer. - **Model type**: Language Model - **Language(s)**: English - **License**: Apache 2.0 - **Model Description**: A 2.8B parameter pretrained language model. # Quick Start Please note that the model requires `transformers` version >= 4.25.1. ## GPU Inference This requires a GPU with 8GB memory. ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.float16) model = model.to('cuda:0') # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True, ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ a name that has been synonymous with the computer age since the 1950s. The British mathematician, logician, and cryptanalyst is widely regarded as the father of modern computing. His contributions to the development of the modern computer and the theory of computation have had a profound impact on the world we live in today. Turing’s contributions to the development of the modern computer were made in the 1940s and 1950s. He is most famous for his work on the Turing machine, a theoretical model of a computing machine that was able to perform all the mathematical operations of a computer. Turing’s work on the... """ ``` ## GPU Inference in Int8 To run inference with int8, please ensure you have installed accelerate and bitsandbytes. 
You can install them with the following command: ```bash pip install accelerate pip install bitsandbytes ``` Then you can run inference with int8 as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ the man who cracked the Enigma code during World War II, and who was later convicted of homosexual acts. He was a brilliant mathematician, and a visionary who foresaw the computer age.... """ ``` ## CPU Inference You can run inference on CPU as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.bfloat16) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ a name that is synonymous with the history of computer science. As the man who invented the Turing machine, the mathematical model that defines the limits of what can be computed, Turing is credited with the invention of the modern computer. Turing was also a mathematician and logician, and his work in these fields led to the development of the field of artificial intelligence... """ ``` Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference. # Uses Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner. #### Out-of-Scope Use `RedPajama-INCITE-Base-3B-v1` is a language model and may not perform well for other use cases outside of its intended scope. For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society. It is important to consider the limitations of the model and to only use it for its intended purpose. #### Misuse and Malicious Use `RedPajama-INCITE-Base-3B-v1` is designed for language modeling. 
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project. Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating fake news, misinformation, or propaganda - Promoting hate speech, discrimination, or violence against individuals or groups - Impersonating individuals or organizations without their consent - Engaging in cyberbullying or harassment - Defamatory content - Spamming or scamming - Sharing confidential or sensitive information without proper authorization - Violating the terms of use of the model or the data used to train it - Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming ## Limitations `RedPajama-INCITE-Base-3B-v1`, like other language models, has limitations that should be taken into consideration. For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data. We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot. ## Training **Training Data** Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) **Training Procedure** - **Hardware:** 256 nodes of 6xV100 (IBM Power9), on the OLCF Summit cluster - **Optimizer:** Apex FusedAdam - **Parallelism:** Pipeline parallel 6, tensor parallel 2 - **Gradient Accumulations**: 8 (global batch size 4M tokens) - **Num of Tokens:** 800B Tokens - **Learning rate:** 0.00016 ## Benchmark Please refer to our [blog post](https://together.xyz) for benchmark results. ## Community Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
atrytone/MIReAD-Neuro-Contrastive
atrytone
2023-07-06T07:40:38Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-06T07:38:47Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 480 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 100, "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Vtmpas/ppo-LunarLander-v2
Vtmpas
2023-07-06T07:36:16Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:35:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 240.43 +/- 16.07 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
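The usage block above is left as a TODO; a minimal completion could look like the sketch below. The checkpoint filename inside the repository is an assumption — check the repo's file listing for the actual `.zip` name.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the saved PPO checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(repo_id="Vtmpas/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```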
Abinaya/opt-1.3b-lora-summary
Abinaya
2023-07-06T07:35:05Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-06T06:35:55Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0 ``` import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "Abinaya/opt-1.3b-lora-summary" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b") tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the LoRA adapter on top of the base model model = PeftModel.from_pretrained(model, peft_model_id) ``` ## Inference to get a summary ``` batch = tokenizer("Natural language processing is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data", return_tensors='pt') with torch.cuda.amp.autocast(): output_tokens = model.generate(**batch, max_new_tokens=50) print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True)) ```
Word2vec/nlpl_222
Word2vec
2023-07-06T07:31:04Z
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_November_2021", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:01:35Z
--- language: eng license: cc-by-4.0 tags: - word2vec datasets: English_Wikipedia_Dump_of_November_2021 --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 199807 corresponding to 2717675616 tokens from the dataset `English_Wikipedia_Dump_of_November_2021`. The model is trained with the following properties: no lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_222", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/222.zip
Word2vec/nlpl_220
Word2vec
2023-07-06T07:30:44Z
0
0
null
[ "word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:01:16Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Russian_National_Corpus --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249333 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`. The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 10 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_220", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/220.zip
NTQAI/pedestrian_gender_recognition
NTQAI
2023-07-06T07:29:58Z
45,879
15
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "beit", "image-classification", "vision", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-06T04:37:51Z
--- license: apache-2.0 tags: - image-classification - vision - generated_from_trainer metrics: - accuracy model-index: - name: outputs results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9107332624867163 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA_files/Pedestrian%20Attribute%20Recognition%20At%20Far%20Distance.pdf) dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.9107 ## Model description More information needed #### How to use You can use this model with the Transformers *pipeline*. ```python from transformers import pipeline gender_classifier = pipeline(model="NTQAI/pedestrian_gender_recognition") image_path = "abc.jpg" results = gender_classifier(image_path) print(results) ``` ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5193 | 1.0 | 2000 | 0.3346 | 0.8533 | | 0.337 | 2.0 | 4000 | 0.2892 | 0.8778 | | 0.3771 | 3.0 | 6000 | 0.2493 | 0.8969 | | 0.3819 | 4.0 | 8000 | 0.2275 | 0.9100 | | 0.3581 | 5.0 | 10000 | 0.2170 | 0.9107 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1 ### Contact information For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
Word2vec/nlpl_206
Word2vec
2023-07-06T07:29:52Z
0
0
null
[ "word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:09:12Z
--- language: pol license: cc-by-4.0 tags: - word2vec datasets: Polish_CommonCrawl_Dump_of_December_2019 --- ## Information A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`. The model is trained with the following properties: no lemmatization and postag, using the fastText Skipgram algorithm with a window of 5 and a dimension of 100. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_206", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/206.zip
Word2vec/nlpl_205
Word2vec
2023-07-06T07:29:34Z
0
0
null
[ "word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:04:52Z
--- language: pol license: cc-by-4.0 tags: - word2vec datasets: Polish_CommonCrawl_Dump_of_December_2019 --- ## Information A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`. The model is trained with the following properties: no lemmatization and postag, using the fastText Continuous Bag-of-Words algorithm with a window of 5 and a dimension of 100. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_205", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/205.zip
afaan00733/refference_filtering
afaan00733
2023-07-06T07:28:03Z
103
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T07:15:25Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: refference_filtering results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # refference_filtering This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3518 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 0.6560 | 0.8947 | | No log | 2.0 | 4 | 0.6103 | 1.0 | | No log | 3.0 | 6 | 0.5545 | 1.0 | | No log | 4.0 | 8 | 0.4951 | 0.9474 | | No log | 5.0 | 10 | 0.4457 | 1.0 | | No log | 6.0 | 12 | 0.4127 | 1.0 | | No log | 7.0 | 14 | 0.3894 | 1.0 | | No log | 8.0 | 16 | 0.3705 | 1.0 | | No log | 9.0 | 18 | 0.3577 | 1.0 | | No log | 10.0 | 20 | 0.3518 | 1.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
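The card documents training only; a minimal inference sketch with the Transformers `pipeline` API is given below. The label names and their meanings are not documented in the card, and the example input is purely illustrative.

```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub and run one example.
classifier = pipeline("text-classification", model="afaan00733/refference_filtering")
print(classifier("Smith et al. (2019) reported similar results on this benchmark."))
```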
Word2vec/nlpl_184
Word2vec
2023-07-06T07:28:01Z
0
0
null
[ "word2vec", "rus", "dataset:Russian_News", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:55:10Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Russian_News --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249318 corresponding to 2550000000 tokens from the dataset `Russian_News`. The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_184", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/184.zip
Word2vec/nlpl_180
Word2vec
2023-07-06T07:27:01Z
0
0
null
[ "word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:54:19Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Russian_National_Corpus --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 189193 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`. The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 20 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_180", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/180.zip
digiplay/Zevinemix_v1.0
digiplay
2023-07-06T07:24:33Z
255
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T04:38:41Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- https://civitai.com/models/103015?modelVersionId=110251 Sample image I made : ![46105ee5-0d15-4fef-869c-8001b8c3bd68.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/AdPItkBFc4Ot3nsb0zm21.jpeg) ![5ab99e32-e1c8-4e05-a8b6-7c53a2b4b521.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/wrn3QMiDqZvxUz1UTc2O1.jpeg) Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/66ac4643-739f-45a4-a7be-1d9f4ce568df/00020-2280478265.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/3bd9f933-ec26-4082-9c9a-3b24fb4a668f/00021-1004882248.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b96bf700-6858-45e7-9bdb-29514dcac6c3/00024-2424101811.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d813c897-c852-4d9a-93db-e5870cf1abfc/00037-2057319243.jpeg)
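The card shows only sample images; a minimal text-to-image sketch using the `diffusers` weights in this repo is given below. The prompt, step count, and fp16 setting are illustrative choices, not settings taken from the card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint in diffusers format and generate one image.
pipe = StableDiffusionPipeline.from_pretrained("digiplay/Zevinemix_v1.0", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("portrait photo of a woman, soft lighting, detailed", num_inference_steps=30).images[0]
image.save("sample.png")
```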
Bugsys0302/m416
Bugsys0302
2023-07-06T07:16:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T07:06:10Z
--- license: creativeml-openrail-m ---
youyougu/test-01
youyougu
2023-07-06T07:06:18Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T06:53:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue model-index: - name: test-01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-01 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Bugsys0302/beltbr
Bugsys0302
2023-07-06T06:59:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T06:57:43Z
--- license: creativeml-openrail-m ---
afaan00733/my_awesome_model
afaan00733
2023-07-06T06:56:30Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T21:18:08Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6546 - Accuracy: 0.4737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 0.6732 | 0.4737 | | No log | 2.0 | 4 | 0.6546 | 0.4737 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
JennnDexter/pokemon-lora
JennnDexter
2023-07-06T06:44:42Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-12T06:24:16Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - JennnDexter/pokemon-lora These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
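A minimal sketch for applying these LoRA weights on top of the base model is shown below; `load_lora_weights` assumes a reasonably recent `diffusers` release, and the prompt and step count are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights("JennnDexter/pokemon-lora")
pipe = pipe.to("cuda")
image = pipe("a cute green pokemon with large eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```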
NasimB/gpt2-concat-aochildes-16plus6k
NasimB
2023-07-06T06:39:38Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T04:47:18Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-aochildes-16plus6k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-aochildes-16plus6k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1978 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7265 | 0.3 | 500 | 5.6481 | | 5.3801 | 0.59 | 1000 | 5.2065 | | 5.0346 | 0.89 | 1500 | 4.9518 | | 4.7589 | 1.19 | 2000 | 4.8123 | | 4.6003 | 1.48 | 2500 | 4.6915 | | 4.4941 | 1.78 | 3000 | 4.5806 | | 4.3447 | 2.07 | 3500 | 4.5155 | | 4.1761 | 2.37 | 4000 | 4.4640 | | 4.1351 | 2.67 | 4500 | 4.4014 | | 4.1043 | 2.96 | 5000 | 4.3576 | | 3.8639 | 3.26 | 5500 | 4.3597 | | 3.8432 | 3.56 | 6000 | 4.3266 | | 3.8118 | 3.85 | 6500 | 4.2913 | | 3.6736 | 4.15 | 7000 | 4.2957 | | 3.5472 | 4.45 | 7500 | 4.2920 | | 3.5398 | 4.74 | 8000 | 4.2794 | | 3.507 | 5.04 | 8500 | 4.2806 | | 3.3499 | 5.33 | 9000 | 4.2855 | | 3.3504 | 5.63 | 9500 | 4.2851 | | 3.3498 | 5.93 | 10000 | 4.2849 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
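The card documents training only; a minimal generation sketch is given below (the prompt and sampling settings are illustrative, not taken from the card).

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint and sample a short continuation.
generator = pipeline("text-generation", model="NasimB/gpt2-concat-aochildes-16plus6k")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```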
aroot/eng-mya-simcse_random
aroot
2023-07-06T06:36:24Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T06:14:10Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_random This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8977 - Bleu: 4.1368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
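Since the card gives no inference example, a minimal English-to-Burmese sketch is shown below. The mBART-50 language codes `en_XX` (source) and `my_MM` (target) are assumptions about how this fine-tune expects its inputs.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Load the fine-tuned mBART-50 checkpoint and its tokenizer with an English source language.
model = MBartForConditionalGeneration.from_pretrained("aroot/eng-mya-simcse_random")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-mya-simcse_random", src_lang="en_XX")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Force the decoder to start with the Burmese language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```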
cherrue/RandomCrop_Rescale_epoch_3_learning_rate_5e_5_decay_0_01
cherrue
2023-07-06T06:30:06Z
63
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T05:35:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: cherrue/pricetag_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # cherrue/pricetag_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0546 - Validation Loss: 1.2226 - Train Accuracy: 0.3846 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1251, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.3379 | 1.2276 | 0.5128 | 0 | | 1.1973 | 1.1561 | 0.4615 | 1 | | 1.0546 | 1.2226 | 0.3846 | 2 | ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Waterhorse/chessgpt-chat-v1
Waterhorse
2023-07-06T06:20:40Z
124
10
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:Waterhorse/chess_data", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:OpenAssistant/oasst1", "dataset:vicgalle/alpaca-gpt4", "arxiv:2306.09200", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-03T21:18:08Z
--- license: apache-2.0 language: - en datasets: - Waterhorse/chess_data - anon8231489123/ShareGPT_Vicuna_unfiltered - OpenAssistant/oasst1 - vicgalle/alpaca-gpt4 --- # Chessgpt-Chat-v1 Chessgpt-Chat-v1 is the sft-tuned model of Chessgpt-Base-v1. - Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1) - Chat Version: [Chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1) Also, we are actively working on the development of the next-generation model, ChessGPT-V2. We welcome any contribution, especially on chess-related datasets. For related matters, please contact xidong.feng.20@ucl.ac.uk. ## Model Details - **Model type**: Language Model - **Language(s)**: English - **License**: Apache 2.0 - **Model Description**: A 2.8B parameter pretrained language model in Chess. ## GPU Inference This requires a GPU with 8GB memory. ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1") model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16) model = model.to('cuda:0') # infer # Conversation between two prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1:" # Conversation between more than two #prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1: Sicilian defense.<|endoftext|>Human 2:" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True, ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) ``` # Uses Excluded uses are described below. ### Direct Use `chessgpt-chat-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling. #### Out-of-Scope Use `chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well for other use cases beyond the chess domain. #### Bias, Risks, and Limitations Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases. # Evaluation Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results. # Citation Information ```bibtex @article{feng2023chessgpt, title={ChessGPT: Bridging Policy Learning and Language Modeling}, author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun}, journal={arXiv preprint arXiv:2306.09200}, year={2023} } ```
Waterhorse/chessgpt-base-v1
Waterhorse
2023-07-06T06:19:40Z
83
6
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:Waterhorse/chess_data", "arxiv:2306.09200", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-02T22:03:14Z
--- license: apache-2.0 language: - en datasets: - Waterhorse/chess_data --- # Chessgpt-Base-3B-v1 Chessgpt-Base-v1 is the base model of Chessgpt. - Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1) - Chat Version: [chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1) Also, we are actively working on the development of the next-generation model, ChessGPT-V2. We welcome any contribution, especially on chess-related datasets. For related matters, please contact xidong.feng.20@ucl.ac.uk. ## Model Details - **Model type**: Language Model - **Language(s)**: English - **License**: Apache 2.0 - **Model Description**: A 2.8B parameter pretrained language model in Chess. ## GPU Inference This requires a GPU with 8GB memory. ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-base-v1") model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-base-v1", torch_dtype=torch.float16) model = model.to('cuda:0') # infer prompt = "Q: 1.e4 c5, what is the name of this opening?A:" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True, ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) ``` # Uses Excluded uses are described below. ### Direct Use `chessgpt-base-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling. #### Out-of-Scope Use `chessgpt-base-v1` is a language model trained on chess-related data and may not perform well for other use cases beyond the chess domain. #### Bias, Risks, and Limitations Just as with any language model, chessgpt-base-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases. # Evaluation Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results. # Citation Information ```bibtex @article{feng2023chessgpt, title={ChessGPT: Bridging Policy Learning and Language Modeling}, author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun}, journal={arXiv preprint arXiv:2306.09200}, year={2023} } ```
sukritiverma/thumbs-up-tom_cruise
sukritiverma
2023-07-06T06:14:17Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-05T23:31:34Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - sukritiverma/thumbs-up-tom_cruise These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
yuuhan/roberta-base-rte-lora
yuuhan
2023-07-06T06:12:21Z
6
0
peft
[ "peft", "text-classification", "en", "dataset:SetFit/rte", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T06:03:00Z
--- license: apache-2.0 datasets: - SetFit/rte language: - en metrics: - accuracy library_name: peft pipeline_tag: text-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details Accuracy: 0.7328519855595668 on RTE
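The card reports an accuracy but no loading code; a minimal sketch for attaching this LoRA adapter to its base model with PEFT is given below. The `num_labels=2` setting and the RTE label order (entailment vs. not_entailment) are assumptions, not details from the card.

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Resolve the base model from the adapter config, then wrap it with the LoRA weights.
config = PeftConfig.from_pretrained("yuuhan/roberta-base-rte-lora")
base = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path, num_labels=2)
model = PeftModel.from_pretrained(base, "yuuhan/roberta-base-rte-lora")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Score one premise/hypothesis pair.
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    label_id = model(**inputs).logits.argmax(dim=-1).item()
print(label_id)
```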
saintzeno/a2c-PandaReachDense-v3
saintzeno
2023-07-06T06:10:45Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T05:52:59Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.22 +/- 0.11 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
LarryAIDraw/sakurako
LarryAIDraw
2023-07-06T06:00:57Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T05:27:47Z
--- license: creativeml-openrail-m --- https://civitai.com/models/100652/sakurako-busujima-grand-blue
Ryukijano/whisper-small-dv
Ryukijano
2023-07-06T05:36:17Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "dataset:mozilla-foundation/common_voice_13_0", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-05T06:25:50Z
--- license: mit datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer --- # Whisper Small DV Model ![Model Banner](https://uploads-ssl.webflow.com/614c82ed388d53640613982e/63eb5ebedd3a9a738e22a03f_open%20ai%20whisper.jpg) ## Model Description The `whisper-small-dv` model is an advanced Automatic Speech Recognition (ASR) model, trained on the extensive [Mozilla Common Voice 13.0](https://commonvoice.mozilla.org/en/datasets) dataset. This model is capable of transcribing spoken language into written text with high accuracy, making it a valuable tool for a wide range of applications, from transcription services to voice assistants. ## Training The model was trained using the PyTorch framework and the Transformers library. Training metrics and visualizations can be viewed on TensorBoard. ## Performance The model's performance was evaluated on a held-out test set. The evaluation metrics and results can be found in the "Eval Results" section. ## Usage The model can be used for automatic speech recognition. Since this is a Whisper checkpoint, load it with the Whisper classes from the Transformers library; `audio_array` below stands for a 16 kHz mono waveform (loaded, for example, with `librosa` or the `datasets` audio feature): ```python from transformers import WhisperForConditionalGeneration, WhisperProcessor model = WhisperForConditionalGeneration.from_pretrained("Ryukijano/whisper-small-dv") processor = WhisperProcessor.from_pretrained("Ryukijano/whisper-small-dv") inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt") predicted_ids = model.generate(inputs.input_features) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] print(transcription) ``` ## License This model is released under the MIT license.
eigenscribe/etzHayim
eigenscribe
2023-07-06T05:34:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T05:33:49Z
--- license: creativeml-openrail-m ---
aroot/eng-fra-simcse_central
aroot
2023-07-06T05:13:08Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T04:53:14Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_central results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_central This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1521 - Bleu: 31.5479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
ashmitg/model_lora
ashmitg
2023-07-06T05:11:34Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-04T22:28:40Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
tuanio/WhisperCTC
tuanio
2023-07-06T05:06:09Z
0
1
null
[ "summarization", "dataset:mozilla-foundation/common_voice_13_0", "arxiv:1910.09700", "region:us" ]
summarization
2023-07-06T04:55:16Z
--- datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer pipeline_tag: summarization --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ```python class WhisperCTC(nn.Module): def __init__( self, encoder_id: str = "tuanio/whisper-encoder.tiny.en", dropout: float = 0.1, vocab_size: int = 47, ): super().__init__() self.encoder = WhisperEncoder.from_pretrained(encoder_id) print("Freezing Whisper Encoder...") self.encoder._freeze_parameters() print("Freezed!") self.lm_head = nn.Sequential( nn.SiLU(), nn.Dropout(dropout), nn.Linear(self.encoder.config.d_model, vocab_size), ) nn.init.kaiming_uniform_( self.lm_head[-1].weight, mode="fan_in", nonlinearity="relu" ) def forward(self, feat: Tensor, attn_mask: Tensor): enc = self.encoder( input_features=feat, attention_mask=attn_mask ).last_hidden_state logits = self.lm_head(enc) log_probs = nn.functional.log_softmax(logits, dim=-1) return log_probs ``` - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data - IndictTTS: https://www.kaggle.com/datasets/tuannguyenvananh/indictts-english [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters ```yaml data_cfg: dataset: processor: feat_extractor_id: ${model_cfg.model.encoder_id} tokenizer_id: ${model_cfg.tokenizer_id} path: base: indict_tts: ../IndicTTS cv: ../ train: - train_data/indict_tts_train.jsonl # - train_data/cv_train.jsonl test: - train_data/indict_tts_test.jsonl # - train_data/cv_test.jsonl dev: - train_data/indict_tts_dev.jsonl # - train_data/cv_dev.jsonl dataloader: batch_size: 46 num_workers: 8 pin_memory: True model_cfg: tokenizer_id: tuanio/wav2vec2-phoneme-ipa-ctc model: dropout: 0.1 encoder_id: tuanio/whisper-encoder.medium.en optim: lr: 1.25e-05 betas: [0.9, 0.998] weight_decay: 0.01 scheduler: name: linear total_steps: -1 warmup_ratio: 0.05 interval: step frequency: 1 trainer_cfg: log: wandb: True logger_wandb: project: aped_indian-lish name: whisper-medium-indict-tts-only-from-epoch1 log_model: all arguments: accelerator: gpu devices: -1 max_epochs: 10 log_every_n_steps: 1 enable_checkpointing: True accumulate_grad_batches: 2 inference_mode: True gradient_clip_val: 5.0 check_val_every_n_epoch: 1 val_check_interval: null experiment_cfg: train: True valid: True test: True ckpt: resume_ckpt: True ckpt_path: ckpt/medium.epoch3.ckpt ``` #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
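## Usage sketch The `WhisperCTC` class above is shown without its imports (`torch`, `torch.nn`, `Tensor`, and `WhisperEncoder` from `transformers`). The following is a hypothetical, assumption-laden sketch of running it end to end: the feature extractor checkpoint, the placeholder waveform, and the greedy CTC decoding are illustrative choices, not taken from the author's training code.
```python
import torch
from torch import nn, Tensor  # needed by the WhisperCTC definition above
from transformers import WhisperFeatureExtractor, Wav2Vec2CTCTokenizer
from transformers.models.whisper.modeling_whisper import WhisperEncoder  # used inside WhisperCTC

# Feature extractor for Whisper medium.en features (assumed compatible with the encoder above)
# and the phoneme CTC tokenizer named in the training config above
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-medium.en")
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("tuanio/wav2vec2-phoneme-ipa-ctc")

model = WhisperCTC(
    encoder_id="tuanio/whisper-encoder.medium.en",
    vocab_size=tokenizer.vocab_size,
).eval()

# Placeholder 5-second waveform at 16 kHz; replace with real speech
audio = torch.zeros(16000 * 5).tolist()
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt", return_attention_mask=True)

with torch.no_grad():
    log_probs = model(inputs.input_features, inputs.attention_mask)

# Greedy CTC decoding: argmax, then let the CTC tokenizer collapse repeats and drop blanks
pred_ids = log_probs.argmax(dim=-1)[0]
print(tokenizer.decode(pred_ids.tolist()))
```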
AAOBA/ppo-PyramidsRND
AAOBA
2023-07-06T05:05:37Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-06T05:04:49Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: chikoto/ppo-PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
whiteDandelion/swin-tiny-patch4-window7-224-finetuned-eurosat
whiteDandelion
2023-07-06T05:01:12Z
228
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T04:12:49Z
--- tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9805 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [andupets/real-estate-image-classification](https://huggingface.co/andupets/real-estate-image-classification) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0613 - Accuracy: 0.9805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.089 | 0.99 | 140 | 0.1050 | 0.9635 | | 0.0565 | 2.0 | 281 | 0.0760 | 0.9725 | | 0.0421 | 2.98 | 420 | 0.0613 | 0.9805 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s45
squeeze-ai-lab
2023-07-06T04:46:32Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-06T03:46:53Z
--- license: other --- **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits each weight matrix into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier entries of the weight matrix. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base). * **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research) * **Bitwidth:** 3-bit * **Sparsity Level:** 0.45% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
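For intuition only, the toy sketch below splits a weight matrix into a small sparse matrix holding the largest-magnitude (outlier) entries and a dense remainder that would then be quantized to low precision. This is an illustrative assumption, not SqueezeLLM's actual algorithm, which also uses sensitivity-based selection and dedicated kernels (see the paper and code linked above).
```python
import torch

def dense_and_sparse_split(weight: torch.Tensor, outlier_fraction: float = 0.0045):
    """Toy split: keep the top `outlier_fraction` largest-magnitude entries in a sparse matrix."""
    k = max(1, int(outlier_fraction * weight.numel()))
    threshold = weight.abs().flatten().topk(k).values.min()
    outlier_mask = weight.abs() >= threshold
    sparse_part = (weight * outlier_mask).to_sparse()  # kept in full precision
    dense_part = weight * (~outlier_mask)              # this part would be quantized to 3 or 4 bits
    return dense_part, sparse_part

w = torch.randn(4096, 4096)
dense, sparse = dense_and_sparse_split(w)
print(sparse.values().numel() / w.numel())  # roughly 0.45% of entries land in the sparse component
```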
mazeinmouse/a2c-AntBulletEnv-v0
mazeinmouse
2023-07-06T04:34:47Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T04:33:37Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1651.08 +/- 126.30 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
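Since the usage section above is left as a TODO, here is a hypothetical loading sketch. The checkpoint filename, the `pybullet_envs` dependency that registers AntBulletEnv-v0, and the classic Gym step API are all assumptions; check the repository files and your stable-baselines3/gym versions.
```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 (assumed dependency)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is a guess; check the "Files" tab of the repository
checkpoint = load_from_hub(repo_id="mazeinmouse/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward:.1f}")
```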
headflame02/AchaxV4
headflame02
2023-07-06T04:30:16Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T04:29:24Z
--- license: creativeml-openrail-m ---
NasimB/gpt2-concat-cbt-rarity-2k-p3k
NasimB
2023-07-06T04:28:43Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T02:13:04Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-2k-p3k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-2k-p3k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.0083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7186 | 0.29 | 500 | 5.6281 | | 5.3685 | 0.58 | 1000 | 5.1947 | | 5.0278 | 0.87 | 1500 | 4.9465 | | 4.7459 | 1.17 | 2000 | 4.8014 | | 4.5838 | 1.46 | 2500 | 4.6757 | | 4.4777 | 1.75 | 3000 | 4.5664 | | 4.3633 | 2.04 | 3500 | 4.4935 | | 4.1601 | 2.33 | 4000 | 4.4512 | | 4.1388 | 2.62 | 4500 | 4.3967 | | 4.1004 | 2.91 | 5000 | 4.3434 | | 3.9085 | 3.21 | 5500 | 4.3385 | | 3.8559 | 3.5 | 6000 | 4.3100 | | 3.8409 | 3.79 | 6500 | 4.2772 | | 3.7507 | 4.08 | 7000 | 4.2758 | | 3.5677 | 4.37 | 7500 | 4.2717 | | 3.5771 | 4.66 | 8000 | 4.2566 | | 3.5653 | 4.95 | 8500 | 4.2354 | | 3.3565 | 5.24 | 9000 | 4.2632 | | 3.3184 | 5.54 | 9500 | 4.2598 | | 3.3222 | 5.83 | 10000 | 4.2510 | | 3.2596 | 6.12 | 10500 | 4.2621 | | 3.1718 | 6.41 | 11000 | 4.2643 | | 3.1656 | 6.7 | 11500 | 4.2647 | | 3.1666 | 6.99 | 12000 | 4.2645 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
aroot/eng-mya-wsample.43a
aroot
2023-07-06T04:28:08Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T04:06:12Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-wsample.43a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-wsample.43a This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8306 - Bleu: 4.6779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
omnitron/PPO-Huggy
omnitron
2023-07-06T04:23:24Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-06T04:22:59Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: omnitron/PPO-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
aroot/eng-mya-wsample.32a
aroot
2023-07-06T04:23:10Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T04:01:01Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-wsample.32a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-wsample.32a This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8284 - Bleu: 4.7194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
ocisd4/openllama-zh-7B
ocisd4
2023-07-06T04:13:52Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T03:46:10Z
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained(
    'ocisd4/openllama-zh',
    add_bos_token=False,
    add_eos_token=False,
    use_auth_token=True,
    use_fast=False)

model = LlamaForCausalLM.from_pretrained('ocisd4/openllama-zh', device_map='auto', use_auth_token=True)

prompt = '關於華碩的傳說'

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    do_sample=True,
    top_k=40,
    top_p=0.95,
    temperature=0.7,
    repetition_penalty=1.08,
)
print(tokenizer.decode(generation_output[0]))
```
This is a 7B pretrained model, trained from the OpenLLaMA pretrained weights, with a context size of 2048. **New model versions will keep being uploaded.**
lovelyxs/PPO-LunarLander-v2
lovelyxs
2023-07-06T04:11:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T03:54:28Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.53 +/- 16.26 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
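The usage section above is a placeholder; a minimal sketch along the following lines should work. The checkpoint filename is an assumption (check the repository files), LunarLander-v2 needs the Box2D extra, and older stable-baselines3 releases expect `import gym` instead of `gymnasium`.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="lovelyxs/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```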
dangvansam/whisper-base-vi
dangvansam
2023-07-06T04:09:35Z
75
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "vi", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-05T10:42:24Z
--- language: - vi pipeline_tag: automatic-speech-recognition ---
squeeze-ai-lab/sq-xgen-7b-8k-inst-w4-s45
squeeze-ai-lab
2023-07-06T03:58:19Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-06T03:47:10Z
--- license: other --- **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits each weight matrix into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier entries of the weight matrix. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit XGen-7B instruction-tuned model (i.e., a model finetuned on public-domain instructional data) with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst). * **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research) * **Bitwidth:** 4-bit * **Sparsity Level:** 0.45% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
squeeze-ai-lab/sq-xgen-7b-8k-inst-w3-s45
squeeze-ai-lab
2023-07-06T03:56:32Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-06T03:47:03Z
--- license: other --- **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits each weight matrix into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier entries of the weight matrix. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit XGen-7B instruction-tuned model (i.e., a model finetuned on public-domain instructional data) with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst). * **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research) * **Bitwidth:** 3-bit * **Sparsity Level:** 0.45% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
digiplay/CoffeeMix_v1
digiplay
2023-07-06T03:55:09Z
307
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T02:17:13Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/40630?modelVersionId=45847 Sample image I made : ![0235d726-e2c8-4923-bf03-c543f2ac4a60.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/C2Bd8j0hjY-9ml-Q1Od2y.jpeg) Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7e65781b-309a-4686-2b94-a73eae211600/00144-1649392094.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/708f60ef-9802-4543-cfa2-d3dd29722100/00164-3364070768.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/729df7f3-ae0c-4ca1-b6e4-59faf294a100/00140-3641118898.jpeg)
aroot/eng-guj-wsample.43a
aroot
2023-07-06T03:44:33Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T03:21:38Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-wsample.43a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-wsample.43a This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2191 - Bleu: 2.9237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
zhundred/ppo-LunarLander-v2
zhundred
2023-07-06T03:38:13Z
6
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T03:37:29Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.86 +/- 20.77 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
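The usage section is a TODO; one possible way to download and watch the policy is sketched below. The filename is a guess, and rendering assumes the Gymnasium API used by stable-baselines3 >= 2.0.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="zhundred/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="human")
obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```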
MWaleed/q-Taxi-v3
MWaleed
2023-07-06T03:23:27Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T03:23:24Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage
```python
model = load_from_hub(repo_id="MWaleed/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
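The snippet above relies on a `load_from_hub` helper from the Deep RL course notebook. A compatible sketch is shown below; it assumes `q-learning.pkl` is a pickled dictionary that contains at least an `env_id` and a `qtable` key, as in the course template.
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dictionary from the Hub and deserialize it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="MWaleed/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])   # Taxi-v3 needs no extra attributes such as is_slippery
qtable = model["qtable"]          # key name assumed from the course template
```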
squeeze-ai-lab/sq-xgen-7b-8k-inst-w3-s0
squeeze-ai-lab
2023-07-06T03:15:42Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-05T23:32:13Z
--- license: other --- **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits each weight matrix into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier entries of the weight matrix. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit XGen-7B instruction-tuned model (i.e., a model finetuned on public-domain instructional data) with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst). * **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research) * **Bitwidth:** 3-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
squeeze-ai-lab/sq-xgen-7b-8k-base-w4-s0
squeeze-ai-lab
2023-07-06T03:14:48Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-05T23:31:51Z
--- license: other --- **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits each weight matrix into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier entries of the weight matrix. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base). * **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research) * **Bitwidth:** 4-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s0
squeeze-ai-lab
2023-07-06T03:14:31Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-05T23:31:15Z
--- license: other --- **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits each weight matrix into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier entries of the weight matrix. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base). * **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research) * **Bitwidth:** 3-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
Bellaaazzzzz/models_fill
Bellaaazzzzz
2023-07-06T02:41:19Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-06T02:35:57Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-Bellaaazzzzz/models_fill These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below. Validation result of round 1: ![images_0_0](./images_0_0.png) Validation result of round 2: ![images_1_0](./images_1_0.png)
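A hypothetical inference sketch with the diffusers library is shown below; the prompt and the conditioning-image path are placeholders, and the exact conditioning format this checkpoint expects is an assumption.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("Bellaaazzzzz/models_fill", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

condition = load_image("./conditioning_image.png")  # placeholder conditioning image
image = pipe("a photo of a modern living room", image=condition, num_inference_steps=30).images[0]
image.save("result.png")
```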
TanimHasan/LLaMA-NUBI-v2
TanimHasan
2023-07-06T02:02:44Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-06T02:02:42Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
roemmele/falcon-7b-loss-score
roemmele
2023-07-06T01:46:31Z
14
0
transformers
[ "transformers", "pytorch", "RefinedWebModel", "text-generation", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-05T22:16:33Z
This is a fork of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b), with a custom endpoint handler (handler.py) that returns the model loss score of a given input text.
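A rough idea of what such a handler can look like is sketched below. This is an assumption-based reconstruction, not the repository's actual handler.py; the request/response schema in particular is guessed.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

class EndpointHandler:
    """Hypothetical handler.py: scores input text by causal language-model loss."""

    def __init__(self, path: str = ""):
        self.tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
        self.model = AutoModelForCausalLM.from_pretrained(
            path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
        )
        self.model.eval()

    def __call__(self, data: dict) -> dict:
        text = data["inputs"]
        enc = self.tokenizer(text, return_tensors="pt").to(self.model.device)
        with torch.no_grad():
            out = self.model(**enc, labels=enc["input_ids"])
        return {"loss": out.loss.item()}
```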
Huggingfly/Reinforce-Cartpole-v1
Huggingfly
2023-07-06T01:38:51Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T01:38:41Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jsjung00/ppo-LunarLander-v2
jsjung00
2023-07-06T01:20:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T01:20:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -636.93 +/- 286.95 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
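For the placeholder usage section above, a minimal loading sketch (the checkpoint filename is an assumption; verify it against the repository files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="jsjung00/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)  # ready for evaluate_policy or a rollout loop on LunarLander-v2
```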
YIMMYCRUZ/vit-model-ojas
YIMMYCRUZ
2023-07-06T01:14:59Z
72
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "image-segmentation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-segmentation
2023-07-05T03:17:25Z
--- license: apache-2.0 tags: - image-segmentation - generated_from_trainer metrics: - accuracy widget: - src: https://i.ibb.co/NL52HmG/sana.png example_title: Healthy - src: https://i.ibb.co/P44CL1q/marchita.png example_title: Bean Rust model-index: - name: vit-model-ojas results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-model-ojas This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0099 - Accuracy: 1.0 ## Model description The model segments images of plant leaves to determine whether they are healthy or withered. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1457 | 3.85 | 500 | 0.0099 | 1.0 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
anujsahani01/finetuned_Mbart_mr_en
anujsahani01
2023-07-06T01:08:06Z
120
0
transformers
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-05T17:34:56Z
--- license: mit tags: - generated_from_trainer model-index: - name: finetuned_Mbart_mr_en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_Mbart_mr_en This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 10000 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
dmatekenya/whisper-small_finetuned_sw_chich
dmatekenya
2023-07-06T00:54:56Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-05T20:02:12Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small_finetuned_sw_chich results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small_finetuned_sw_chich This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7430 - Wer: 80.1992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0324 | 4.39 | 500 | 1.5624 | 84.6754 | | 0.0151 | 8.77 | 1000 | 1.6639 | 82.4073 | | 0.0099 | 13.16 | 1500 | 1.7377 | 78.8912 | | 0.0081 | 17.54 | 2000 | 1.7430 | 80.1992 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
chaudha7/LLMs
chaudha7
2023-07-06T00:51:44Z
0
0
null
[ "region:us" ]
null
2023-05-17T21:15:35Z
### Model Description This is a fine-tuned Bloom-7b model. It has been trained on a dummy dataset for question answering purposes. It is not very useful for the general public. I wanted to get an idea of the hugging face model and dataset pipeline. Do check out https://huggingface.co/chaudha7/DiaryFlow - **Developed by:** Aashay Chaudhari
chaudha7/DiaryFlow
chaudha7
2023-07-06T00:49:49Z
0
0
null
[ "region:us" ]
null
2023-07-06T00:43:32Z
### Model Description This is a fine-tuned Bloom-7b model. It was a demo project which I wanted to try to alleviate the seriousness and rapid pace around the "LLM" usecases. This model has been trained on a custom chatGPT-created dataset (https://huggingface.co/datasets/chaudha7/Diary-Entry-To-Rap) which turns diary entries into short rap songs with a motivational ending. Just something fun for the users. ### Developed by: Aashay Chaudhari ## Training procedure The following `bits and bytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions ### Results ### Input: Dear Diary, Oh boy, today was such an incredible day! I woke up with a burst of excitement in my heart because it was my birthday! Double digits, baby! Mom and Dad surprised me with a brand new bike, just like the one I've been dreaming about for ages. We went for a ride in the park, and I felt like I was flying! My friends came over for a super fun party, and we played games, danced, and laughed until our tummies hurt. Grandma baked the most delicious chocolate cake, and I blew out all the candles with one big breath. The best part? I got some awesome presents, like a shiny diary (just like you, Diary!), art supplies, and a book about adventures. I can't wait to read it tonight! Tonight, as I snuggle under my cozy blanket, I can't help but feel so grateful for the love and joy that surrounds me. Being 10 feels amazing, Diary. I can't wait to see what other incredible adventures await me this year! Until tomorrow, Emily ### Output: (Verse) Today's my birthday, feeling so alive, Double digits, yo, I'm ready to strive. Got a brand-new bike, flying through the air, Riding in the park, without a single care. Friends came over, we had a blast, Playing games, dancing, memories that'll last. Grandma's chocolate cake, candles all aglow, Blowing them out, a wish ready to grow. Shiny diary, art supplies, and books, Unleashing my creativity, no limits, no hooks. Grateful for love and joy, can't help but smile, Ten years old, embracing life's every mile. (Motivational Ending) No matter the age, dreams light up the way, Believe in yourself, make the most of today. Adventures await, with each step you take, Keep shining, keep growing, there's no limit to what you'll make.
CalamitousVisibility/UK_Energy_Industry_Complaints_Identifier_ver1
CalamitousVisibility
2023-07-06T00:28:38Z
109
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-05T22:24:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: UK_Energy_Industry_Complaints_Identifier_ver1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # UK_Energy_Industry_Complaints_Identifier_ver1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a balanced dataset consisting of 17,620 publicly available customer reviews of various domestic energy suppliers in the United Kingdom. It achieves the following results on the evaluation set: - Loss: 0.3369 - Accuracy: 0.9561 - F1: [0.95594347 0.95621041] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.11.0
momomomomomo/Rotten_Tomato_Classfier
momomomomomo
2023-07-05T23:58:12Z
63
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-05T21:06:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: momomomomomo/Rotten_Tomato_Classfier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # momomomomomo/Rotten_Tomato_Classfier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5703 - Validation Loss: 0.6171 - Train Accuracy: 0.7131 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 189675, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6738 | 0.6373 | 0.7018 | 0 | | 0.5703 | 0.6171 | 0.7131 | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
gagan3012/Qalam_onnx
gagan3012
2023-07-05T23:44:16Z
10
0
transformers.js
[ "transformers.js", "onnx", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "ar", "license:apache-2.0", "region:us" ]
image-to-text
2023-07-05T22:41:49Z
--- license: apache-2.0 language: - ar metrics: - wer library_name: transformers.js pipeline_tag: image-to-text ---
eluzhnica/mpt-7b-instruct-peft-compatible
eluzhnica
2023-07-05T23:35:23Z
18
0
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "dataset:mosaicml/dolly_hhrlhf", "arxiv:2205.14135", "arxiv:2108.12409", "arxiv:2010.04245", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-05T23:14:18Z
--- license: cc-by-sa-3.0 datasets: - mosaicml/dolly_hhrlhf tags: - Composer - MosaicML - llm-foundry inference: false --- # MPT-7B-Instruct This is the MPT-7B-Instruct but with added support to finetune using peft (tested with qlora). It is not finetuned further, the weights are the same as the original MPT-7B-Instruct. I have not traced through the whole huggingface stack to see if this is working correctly but it does finetune with qlora and outputs are reasonable. Inspired by implementations here https://huggingface.co/cekal/mpt-7b-peft-compatible/commits/main https://huggingface.co/mosaicml/mpt-7b/discussions/42. The original description for MosaicML team below: MPT-7B-Instruct is a model for short-form instruction following. It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. * License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date May 5, 2023 ## Model License CC-By-SA-3.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ### Example Question/Instruction **Longboi24**: > What is a quoll? **MPT-7B-Instruct**: >A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America ## How to Use Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package. It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-instruct', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-7b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' config.init_device = 'cuda:0' # For fast initialization directly on GPU! 
model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-7b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). ```python from transformers import pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ### Formatting This model was trained on data formatted in the dolly-15k format: ```python INSTRUCTION_KEY = "### Instruction:" RESPONSE_KEY = "### Response:" INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request." PROMPT_FOR_GENERATION_FORMAT = """{intro} {instruction_key} {instruction} {response_key} """.format( intro=INTRO_BLURB, instruction_key=INSTRUCTION_KEY, instruction="{instruction}", response_key=RESPONSE_KEY, ) example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering." fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example) ``` In the above example, `fmt_ex` is ready to be tokenized and sent through the model. ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## PreTraining Data For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b). The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ### Training Configuration This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer. 
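## Finetuning with PEFT / QLoRA (illustrative sketch) Since the point of this fork is PEFT compatibility, the following sketch shows one way to attach LoRA adapters on top of a 4-bit load. The bitsandbytes settings and `target_modules=['Wqkv']` are assumptions for illustration, not settings verified by the author.
```python
import torch
import transformers
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

name = 'eluzhnica/mpt-7b-instruct-peft-compatible'
bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map='auto',
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=['Wqkv'],  # assumed attention projection name for MPT blocks
    task_type='CAUSAL_LM',
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```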
## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-Instruct was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## Acknowledgements This model was finetuned by Sam Havens and the MosaicML NLP team. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
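## QLoRA setup sketch Since the main addition in this repository is PEFT/QLoRA compatibility, here is a minimal setup sketch. This is not the author's exact recipe: the 4-bit settings, the LoRA hyperparameters, and the `Wqkv` target-module name are assumptions to verify against the checkpoint's modeling code.

```python
import torch
import transformers
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative QLoRA-style 4-bit loading config (values are assumptions).
bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = transformers.AutoModelForCausalLM.from_pretrained(
    "<this-repo-id>",  # placeholder: substitute this repository's model id
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables input grads

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["Wqkv"],  # assumed name of MPT's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters should be trainable
```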
asenella/mmnist_MoPoEconfig_resnet_seed_0_ratio_0_c
asenella
2023-07-05T23:16:00Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-06-04T21:11:40Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
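For this checkpoint specifically, that call would look like the following (assuming the default repository layout):

```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/mmnist_MoPoEconfig_resnet_seed_0_ratio_0_c")
```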
hopkins/eng-mya-simcse.near2.4440
hopkins
2023-07-05T22:49:46Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-05T22:28:28Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse.near2.4440 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse.near2.4440 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8502 - Bleu: 4.8797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
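The card gives no inference example; below is a minimal translation sketch. It assumes the fine-tune keeps the base mBART-50 tokenizer and its language codes (`en_XX` for English, `my_MM` for Burmese), which is the usual setup for models derived from `facebook/mbart-large-50-many-to-many-mmt`. The same pattern, with the appropriate target-language code, applies to the sibling mBART fine-tunes below.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "hopkins/eng-mya-simcse.near2.4440"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

tokenizer.src_lang = "en_XX"  # assumed source-language code, inherited from mBART-50
inputs = tokenizer("How are you?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"],  # Burmese target code
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```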
hopkins/eng-mya-simcse.dev2.4440
hopkins
2023-07-05T22:46:19Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-05T22:24:42Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse.dev2.4440 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse.dev2.4440 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8287 - Bleu: 4.8012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
DawidL/ppo-LunarLander-v2
DawidL
2023-07-05T22:15:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-05T22:15:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 251.01 +/- 17.80 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below is an assumption, so check this repository's file list for the actual name.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it against the files in this repo.
checkpoint = load_from_hub(repo_id="DawidL/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_g05
jordyvl
2023-07-05T22:13:02Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-05T20:03:36Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_g05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_g05 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1631 - Accuracy: 0.72 - Exit 0 Accuracy: 0.1125 - Exit 1 Accuracy: 0.155 - Exit 2 Accuracy: 0.3325 - Exit 3 Accuracy: 0.3225 - Exit 4 Accuracy: 0.105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7600 | 0.1075 | 0.075 | 0.0675 | 0.0925 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7312 | 0.1125 | 0.07 | 0.065 | 0.12 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6924 | 0.1325 | 0.075 | 0.06 | 0.1175 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6597 | 0.1675 | 0.0775 | 0.055 | 0.125 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6138 | 0.2025 | 0.0825 | 0.0575 | 0.12 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5640 | 0.215 | 0.0875 | 0.08 | 0.11 | 0.0625 | 0.0625 | | No log | 6.72 | 14 | 2.5403 | 0.22 | 0.09 | 0.08 | 0.12 | 0.0625 | 0.0625 | | No log | 7.72 | 16 | 2.5207 | 0.2275 | 0.09 | 0.0925 | 0.12 | 0.0625 | 0.0625 | | No log | 8.72 | 18 | 2.4860 | 0.27 | 0.0975 | 0.0975 | 0.115 | 0.0625 | 0.0625 | | No log | 9.72 | 20 | 2.4397 | 0.295 | 0.1 | 0.1075 | 0.13 | 0.0625 | 0.0625 | | No log | 10.72 | 22 | 2.4044 | 0.3 | 0.095 | 0.12 | 0.1475 | 0.0625 | 0.0625 | | No log | 11.72 | 24 | 2.3671 | 0.3075 | 0.1025 | 0.1175 | 0.1475 | 0.065 | 0.0625 | | No log | 12.72 | 26 | 2.3178 | 0.3175 | 0.105 | 0.115 | 0.145 | 0.0775 | 0.0625 | | No log | 13.72 | 28 | 2.2514 | 0.355 | 0.105 | 0.1225 | 0.155 | 0.11 | 0.0625 | | No log | 14.72 | 30 | 2.2030 | 0.3775 | 0.1125 | 0.125 | 0.195 | 0.115 | 0.065 | | No log | 15.72 | 32 | 2.1831 | 0.3725 | 0.1075 | 0.13 | 0.225 | 0.1075 | 0.065 | | No log | 16.72 | 34 | 2.1417 | 0.3675 | 0.115 | 0.1375 | 0.2375 | 0.1075 | 0.065 | | No log | 17.72 | 36 | 2.0688 | 0.3975 | 0.1075 | 0.1375 | 0.255 | 0.115 | 0.07 | | No log | 18.72 | 38 | 2.0252 | 0.4075 | 0.115 | 0.14 | 0.26 | 0.1225 | 0.0825 | | No log | 19.72 | 40 | 1.9896 | 0.4275 | 0.115 | 0.14 | 0.265 | 0.125 | 0.0925 | | No log | 20.72 | 42 | 1.9344 | 0.4675 | 0.11 | 0.14 | 0.2675 | 0.11 | 0.095 | | No log | 21.72 | 44 | 1.8826 | 0.48 | 0.11 | 0.1375 | 0.2625 | 0.1175 | 0.095 | | No log | 22.72 | 46 | 1.8459 | 0.505 | 0.11 | 0.1375 | 0.2525 | 0.1125 | 0.095 | | No log | 23.72 | 48 | 
1.8152 | 0.5375 | 0.11 | 0.14 | 0.275 | 0.12 | 0.0975 | | No log | 24.72 | 50 | 1.7909 | 0.535 | 0.11 | 0.1425 | 0.2975 | 0.135 | 0.1025 | | No log | 25.72 | 52 | 1.7339 | 0.5575 | 0.1075 | 0.145 | 0.3 | 0.13 | 0.0975 | | No log | 26.72 | 54 | 1.6912 | 0.56 | 0.1125 | 0.145 | 0.295 | 0.14 | 0.1025 | | No log | 27.72 | 56 | 1.6601 | 0.575 | 0.115 | 0.1475 | 0.3025 | 0.1425 | 0.1025 | | No log | 28.72 | 58 | 1.6302 | 0.585 | 0.115 | 0.1475 | 0.295 | 0.145 | 0.1 | | No log | 29.72 | 60 | 1.5808 | 0.585 | 0.1125 | 0.1475 | 0.3 | 0.155 | 0.1025 | | No log | 30.72 | 62 | 1.5408 | 0.6 | 0.115 | 0.1475 | 0.3025 | 0.175 | 0.1 | | No log | 31.72 | 64 | 1.5289 | 0.605 | 0.115 | 0.145 | 0.3 | 0.18 | 0.0975 | | No log | 32.72 | 66 | 1.5030 | 0.6125 | 0.115 | 0.145 | 0.2975 | 0.18 | 0.1 | | No log | 33.72 | 68 | 1.4653 | 0.635 | 0.115 | 0.145 | 0.3 | 0.185 | 0.1 | | No log | 34.72 | 70 | 1.4342 | 0.6325 | 0.1175 | 0.145 | 0.295 | 0.21 | 0.0975 | | No log | 35.72 | 72 | 1.4088 | 0.64 | 0.115 | 0.1475 | 0.2975 | 0.2175 | 0.095 | | No log | 36.72 | 74 | 1.3848 | 0.6375 | 0.1175 | 0.1475 | 0.3075 | 0.2175 | 0.095 | | No log | 37.72 | 76 | 1.3533 | 0.6775 | 0.12 | 0.1475 | 0.315 | 0.2475 | 0.095 | | No log | 38.72 | 78 | 1.3349 | 0.68 | 0.1175 | 0.1475 | 0.3125 | 0.2525 | 0.095 | | No log | 39.72 | 80 | 1.3140 | 0.665 | 0.115 | 0.1475 | 0.325 | 0.255 | 0.0975 | | No log | 40.72 | 82 | 1.3001 | 0.6825 | 0.115 | 0.1475 | 0.325 | 0.265 | 0.0975 | | No log | 41.72 | 84 | 1.2824 | 0.695 | 0.115 | 0.1475 | 0.32 | 0.2625 | 0.1 | | No log | 42.72 | 86 | 1.2740 | 0.7 | 0.115 | 0.1525 | 0.3275 | 0.265 | 0.1 | | No log | 43.72 | 88 | 1.2538 | 0.7 | 0.115 | 0.1525 | 0.33 | 0.2675 | 0.1 | | No log | 44.72 | 90 | 1.2348 | 0.6925 | 0.1125 | 0.1525 | 0.33 | 0.29 | 0.1025 | | No log | 45.72 | 92 | 1.2253 | 0.705 | 0.1125 | 0.1525 | 0.3325 | 0.29 | 0.105 | | No log | 46.72 | 94 | 1.2225 | 0.7025 | 0.1125 | 0.1525 | 0.335 | 0.2925 | 0.105 | | No log | 47.72 | 96 | 1.2153 | 0.7075 | 0.1125 | 0.1525 | 0.3375 | 0.295 | 0.105 | | No log | 48.72 | 98 | 1.1988 | 0.725 | 0.1125 | 0.1525 | 0.3325 | 0.3025 | 0.105 | | No log | 49.72 | 100 | 1.1897 | 0.725 | 0.1125 | 0.1525 | 0.3325 | 0.31 | 0.105 | | No log | 50.72 | 102 | 1.1835 | 0.7225 | 0.1125 | 0.1525 | 0.33 | 0.315 | 0.1025 | | No log | 51.72 | 104 | 1.1834 | 0.72 | 0.1125 | 0.1525 | 0.335 | 0.3175 | 0.1025 | | No log | 52.72 | 106 | 1.1767 | 0.7275 | 0.1125 | 0.1525 | 0.335 | 0.305 | 0.105 | | No log | 53.72 | 108 | 1.1726 | 0.7225 | 0.1125 | 0.1525 | 0.335 | 0.31 | 0.105 | | No log | 54.72 | 110 | 1.1696 | 0.7175 | 0.1125 | 0.1525 | 0.335 | 0.31 | 0.105 | | No log | 55.72 | 112 | 1.1673 | 0.7125 | 0.1125 | 0.155 | 0.3325 | 0.3125 | 0.105 | | No log | 56.72 | 114 | 1.1653 | 0.7175 | 0.1125 | 0.155 | 0.3325 | 0.32 | 0.105 | | No log | 57.72 | 116 | 1.1638 | 0.72 | 0.1125 | 0.155 | 0.33 | 0.325 | 0.105 | | No log | 58.72 | 118 | 1.1633 | 0.72 | 0.1125 | 0.155 | 0.33 | 0.3225 | 0.105 | | No log | 59.72 | 120 | 1.1631 | 0.72 | 0.1125 | 0.155 | 0.3325 | 0.3225 | 0.105 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
TheSupremeTaco/Taxi-v3
TheSupremeTaco
2023-07-05T22:11:34Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-05T22:11:31Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="TheSupremeTaco/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
LiviaQi/trained_model
LiviaQi
2023-07-05T22:10:22Z
188
0
transformers
[ "transformers", "pytorch", "tensorboard", "detr", "object-detection", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2023-07-05T21:06:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: trained_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_model This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 500 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
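The card includes no usage snippet; the sketch below assumes the checkpoint works with the standard `object-detection` pipeline (the label names depend on the unspecified imagefolder dataset the model was trained on).

```python
from transformers import pipeline

detector = pipeline("object-detection", model="LiviaQi/trained_model")
results = detector("example.jpg")  # placeholder path or URL to a test image
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```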
asenella/mmnist_MMVAEPlusconfig_resnet_seed_0_ratio_0_c
asenella
2023-07-05T22:07:37Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-05T22:07:20Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
josero23/irrut
josero23
2023-07-05T21:55:44Z
1
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-05T21:42:44Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### irrut Dreambooth model trained by josero23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
newconew/speecht5_finetuned_voxpopuli_nl
newconew
2023-07-05T21:55:25Z
80
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-05T19:33:24Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4612 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5194 | 4.3 | 1000 | 0.4806 | | 0.494 | 8.61 | 2000 | 0.4670 | | 0.4929 | 12.91 | 3000 | 0.4642 | | 0.4914 | 17.21 | 4000 | 0.4612 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
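The card lists training details only; below is a minimal inference sketch following the standard SpeechT5 pipeline, with the HiFi-GAN vocoder and an x-vector speaker embedding (the embedding source used here is an assumption, any 512-dim x-vector works).

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("newconew/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("newconew/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# One common source of speaker x-vectors; index choice is arbitrary.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="hallo, dit is een test", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```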
hopkins/eng-fra-simcse.near2.4440
hopkins
2023-07-05T21:32:35Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-05T21:12:42Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse.near2.4440 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse.near2.4440 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1372 - Bleu: 33.0232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
hopkins/eng-fra-simcse.dev2.4440
hopkins
2023-07-05T21:32:34Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-05T21:12:42Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse.dev2.4440 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse.dev2.4440 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1146 - Bleu: 33.6862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
KevinQuijano/model
KevinQuijano
2023-07-05T21:12:27Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-05T14:32:19Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - KevinQuijano/model This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
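A minimal inference sketch using the instance prompt above (standard diffusers loading is assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("KevinQuijano/model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sks" is the learned identifier from training; the rest of the prompt is free-form.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```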
joydragon/Reinforce-Pixelcopter-PLE-v2
joydragon
2023-07-05T20:50:19Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-05T20:50:15Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 33.00 +/- 28.73 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
joydragon/Reinforce-Pixelcopter-PLE-v1
joydragon
2023-07-05T20:49:56Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-05T19:14:28Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 39.00 +/- 36.85 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
choward/csv
choward
2023-07-05T20:46:13Z
0
0
null
[ "text-generation", "region:us" ]
text-generation
2023-07-05T20:42:22Z
--- pipeline_tag: text-generation ---
egarciamartin/poca-SoccerTwos
egarciamartin
2023-07-05T20:40:50Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-07-05T20:40:07Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: egarciamartin/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dhiruHF/falcon7b-FT-DocQA-v2
dhiruHF
2023-07-05T20:39:12Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-05T20:39:10Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
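Expressed as code, the quantization settings listed above correspond to roughly the following (a sketch, assuming `transformers`' `BitsAndBytesConfig`):

```python
import torch
from transformers import BitsAndBytesConfig

# Values copied directly from the training-procedure list above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```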
vinson099/DatasetModel
vinson099
2023-07-05T20:34:01Z
191
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-05T18:00:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: DatasetModel results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: flower_photos split: train[:500] args: flower_photos metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DatasetModel This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6457 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.96 | 6 | 1.2651 | 0.99 | | 1.3875 | 1.92 | 12 | 0.7931 | 1.0 | | 1.3875 | 2.88 | 18 | 0.6457 | 1.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
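The card stops at training details; here is a minimal inference sketch (the class names come from whatever labels the flower_photos split defined):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="vinson099/DatasetModel")
preds = classifier("example_flower.jpg")  # placeholder path or URL to a test image
print(preds[0]["label"], preds[0]["score"])
```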
SaffalPoosh/falcon_7B_instruct_safetensors
SaffalPoosh
2023-07-05T20:27:23Z
16
0
transformers
[ "transformers", "safetensors", "RefinedWebModel", "text-generation", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-05T20:13:30Z
Converted to safetensors using the oobabooga conversion script, in order to test the TGI (Text Generation Inference) LLM inference engine.
durdana/alpaca7B-lora
durdana
2023-07-05T20:25:35Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-05T20:25:31Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
jcm-art/hf_image_classification_tuning_pipeline
jcm-art
2023-07-05T20:14:07Z
191
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:food101", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-05T19:35:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - food101 metrics: - accuracy model-index: - name: hf_image_classification_tuning_pipeline results: - task: name: Image Classification type: image-classification dataset: name: food101 type: food101 config: default split: train[:5000] args: default metrics: - name: Accuracy type: accuracy value: 0.903 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hf_image_classification_tuning_pipeline This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 1.5764 - Accuracy: 0.903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7113 | 0.99 | 62 | 2.4840 | 0.849 | | 1.8024 | 2.0 | 125 | 1.7298 | 0.891 | | 1.5532 | 2.98 | 186 | 1.5764 | 0.903 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted
jordyvl
2023-07-05T20:02:58Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-05T17:53:13Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0783 - Accuracy: 0.71 - Exit 0 Accuracy: 0.115 - Exit 1 Accuracy: 0.1575 - Exit 2 Accuracy: 0.185 - Exit 3 Accuracy: 0.0875 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7602 | 0.1125 | 0.0925 | 0.0675 | 0.0875 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7309 | 0.115 | 0.1175 | 0.0675 | 0.1075 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6967 | 0.1325 | 0.095 | 0.06 | 0.1175 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6631 | 0.17 | 0.085 | 0.0575 | 0.1275 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6242 | 0.205 | 0.085 | 0.0575 | 0.1225 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5736 | 0.2175 | 0.0875 | 0.0825 | 0.12 | 0.0625 | 0.0625 | | No log | 6.72 | 14 | 2.5410 | 0.215 | 0.09 | 0.08 | 0.12 | 0.0625 | 0.0625 | | No log | 7.72 | 16 | 2.5229 | 0.2325 | 0.1 | 0.0925 | 0.13 | 0.0625 | 0.0625 | | No log | 8.72 | 18 | 2.4841 | 0.2525 | 0.1 | 0.1 | 0.1325 | 0.0625 | 0.0625 | | No log | 9.72 | 20 | 2.4382 | 0.29 | 0.1 | 0.1025 | 0.1325 | 0.0625 | 0.0625 | | No log | 10.72 | 22 | 2.3823 | 0.3 | 0.1 | 0.1275 | 0.1325 | 0.0625 | 0.0625 | | No log | 11.72 | 24 | 2.3389 | 0.3275 | 0.1 | 0.1175 | 0.1225 | 0.0625 | 0.0625 | | No log | 12.72 | 26 | 2.3002 | 0.35 | 0.0975 | 0.12 | 0.1225 | 0.0625 | 0.0625 | | No log | 13.72 | 28 | 2.2421 | 0.36 | 0.0975 | 0.125 | 0.1275 | 0.0625 | 0.0625 | | No log | 14.72 | 30 | 2.2026 | 0.3575 | 0.1025 | 0.13 | 0.125 | 0.0625 | 0.0625 | | No log | 15.72 | 32 | 2.1712 | 0.375 | 0.105 | 0.1375 | 0.125 | 0.0625 | 0.0625 | | No log | 16.72 | 34 | 2.0999 | 0.4075 | 0.1 | 0.145 | 0.125 | 0.0625 | 0.0625 | | No log | 17.72 | 36 | 2.0414 | 0.4225 | 0.1025 | 0.145 | 0.1275 | 0.0625 | 0.0625 | | No log | 18.72 | 38 | 1.9981 | 0.4375 | 0.0975 | 0.1425 | 0.13 | 0.0625 | 0.0625 | | No log | 19.72 | 40 | 1.9369 | 0.4575 | 0.1025 | 0.14 | 0.1425 | 0.0625 | 0.0625 | | No log | 20.72 | 42 | 1.8903 | 0.4975 | 0.1025 | 0.14 | 0.145 | 0.0625 | 0.0625 | | No log | 21.72 | 44 | 1.8242 | 0.525 | 0.1025 | 0.1425 | 0.15 | 0.0625 | 0.0625 | | No log | 22.72 | 46 | 1.7520 | 0.5325 | 0.11 | 0.1475 | 0.1475 | 0.0625 | 0.0625 
| | No log | 23.72 | 48 | 1.7203 | 0.5525 | 0.1125 | 0.1475 | 0.1525 | 0.0625 | 0.0625 | | No log | 24.72 | 50 | 1.6753 | 0.565 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 25.72 | 52 | 1.6245 | 0.575 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 26.72 | 54 | 1.5832 | 0.61 | 0.11 | 0.15 | 0.1525 | 0.0625 | 0.0625 | | No log | 27.72 | 56 | 1.5404 | 0.61 | 0.11 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 28.72 | 58 | 1.4958 | 0.6125 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 | | No log | 29.72 | 60 | 1.4613 | 0.6325 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 | | No log | 30.72 | 62 | 1.4479 | 0.63 | 0.11 | 0.1525 | 0.16 | 0.0625 | 0.0625 | | No log | 31.72 | 64 | 1.4101 | 0.64 | 0.1125 | 0.1525 | 0.165 | 0.0625 | 0.0625 | | No log | 32.72 | 66 | 1.3699 | 0.655 | 0.1125 | 0.1525 | 0.1675 | 0.0625 | 0.0625 | | No log | 33.72 | 68 | 1.3427 | 0.6725 | 0.115 | 0.1525 | 0.165 | 0.0625 | 0.0625 | | No log | 34.72 | 70 | 1.3161 | 0.6825 | 0.115 | 0.1525 | 0.1625 | 0.0625 | 0.0625 | | No log | 35.72 | 72 | 1.2896 | 0.7025 | 0.115 | 0.1525 | 0.1675 | 0.0625 | 0.0625 | | No log | 36.72 | 74 | 1.2720 | 0.705 | 0.11 | 0.1525 | 0.185 | 0.0625 | 0.0625 | | No log | 37.72 | 76 | 1.2471 | 0.71 | 0.11 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 38.72 | 78 | 1.2307 | 0.71 | 0.11 | 0.155 | 0.1775 | 0.0625 | 0.0625 | | No log | 39.72 | 80 | 1.2174 | 0.7175 | 0.1125 | 0.155 | 0.1825 | 0.0625 | 0.0625 | | No log | 40.72 | 82 | 1.1991 | 0.705 | 0.1125 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 41.72 | 84 | 1.1867 | 0.71 | 0.1175 | 0.1525 | 0.18 | 0.065 | 0.0625 | | No log | 42.72 | 86 | 1.1764 | 0.7025 | 0.115 | 0.1525 | 0.18 | 0.0675 | 0.0625 | | No log | 43.72 | 88 | 1.1601 | 0.715 | 0.115 | 0.1525 | 0.1825 | 0.0725 | 0.0625 | | No log | 44.72 | 90 | 1.1410 | 0.7175 | 0.115 | 0.1525 | 0.18 | 0.075 | 0.0625 | | No log | 45.72 | 92 | 1.1408 | 0.71 | 0.115 | 0.155 | 0.1825 | 0.075 | 0.0625 | | No log | 46.72 | 94 | 1.1443 | 0.7075 | 0.115 | 0.155 | 0.1825 | 0.0775 | 0.0625 | | No log | 47.72 | 96 | 1.1364 | 0.705 | 0.115 | 0.155 | 0.1775 | 0.0825 | 0.0625 | | No log | 48.72 | 98 | 1.1251 | 0.71 | 0.115 | 0.155 | 0.175 | 0.085 | 0.0625 | | No log | 49.72 | 100 | 1.1113 | 0.7175 | 0.115 | 0.155 | 0.1775 | 0.085 | 0.0625 | | No log | 50.72 | 102 | 1.1040 | 0.7175 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 | | No log | 51.72 | 104 | 1.0972 | 0.715 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 | | No log | 52.72 | 106 | 1.0938 | 0.7175 | 0.115 | 0.1575 | 0.1825 | 0.0875 | 0.0625 | | No log | 53.72 | 108 | 1.0931 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | | No log | 54.72 | 110 | 1.0887 | 0.7075 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | | No log | 55.72 | 112 | 1.0865 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 56.72 | 114 | 1.0828 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 57.72 | 116 | 1.0801 | 0.7075 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 58.72 | 118 | 1.0786 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 59.72 | 120 | 1.0783 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
BadreddineHug/donut-base-ocr6
BadreddineHug
2023-07-05T20:01:09Z
72
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-05T19:35:16Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-ocr6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-ocr6 This model is a fine-tuned version of [BadreddineHug/donut-base-ocr4](https://huggingface.co/BadreddineHug/donut-base-ocr4) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
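No inference example is given; below is a sketch of the usual Donut generation loop. The task prompt token is an assumption (fine-tunes define their own start token; this one is inherited from the cord-v2 base model in the lineage, so verify it against the tokenizer).

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("BadreddineHug/donut-base-ocr6")
model = VisionEncoderDecoderModel.from_pretrained("BadreddineHug/donut-base-ocr6")

image = Image.open("document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_cord-v2>"  # assumed start token; check processor.tokenizer
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```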
pszemraj/gpt2-medium-vaguely-human-dialogue
pszemraj
2023-07-05T19:57:49Z
15
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "gpt", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - text-generation - gpt2 - gpt license: mit widget: - text: |+ Do you like my new haircut? person beta: example_title: haircut - text: |+ I love to learn new things.. are you willing to teach me something? person beta: example_title: teaching - text: |+ What's your favorite animal? Mine is the dog? person beta: example_title: favorite - text: |+ how much does it cost? person beta: example_title: money inference: parameters: min_length: 2 max_length: 64 length_penalty: 0.6 no_repeat_ngram_size: 3 do_sample: true top_p: 0.85 top_k: 10 repetition_penalty: 2.1 pipeline_tag: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pszemraj/gpt2-medium-vaguely-human-dialogue This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a parsed version of Wizard of Wikipedia. Because the batch size was so large, it learned a general understanding of words that make sense together but does not specifically respond to anything - sort of like an alien learning to imitate human words to convince others that it is human. It achieves the following results on the evaluation set: - Loss: 4.3281 ## Model description - a decent example of what happens when your batch size is too large and the global optimum does not reflect specific prompts / use cases. ## Intended uses & limitations - there are no intended uses ## Training and evaluation data - a parsed version of the wizard of Wikipedia dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 34.991 | 1.0 | 837 | 14.8359 | | 12.2881 | 2.0 | 1674 | 9.375 | | 8.5071 | 3.0 | 2511 | 7.2148 | | 7.6031 | 4.0 | 3348 | 6.1758 | | 6.4808 | 5.0 | 4185 | 5.5820 | | 5.8562 | 6.0 | 5022 | 5.0977 | | 5.6094 | 7.0 | 5859 | 4.8203 | | 5.2591 | 8.0 | 6696 | 4.5977 | | 5.0031 | 9.0 | 7533 | 4.4219 | | 4.8837 | 10.0 | 8370 | 4.3281 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Tokenizers 0.11.0
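The widget parameters above map directly onto a `generate` call; here is a sketch, with the prompt format following the widget examples (a line of dialogue, then `person beta:`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "pszemraj/gpt2-medium-vaguely-human-dialogue"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Do you like my new haircut?\nperson beta:\n\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings copied from the card's inference parameters.
outputs = model.generate(
    **inputs,
    min_length=2,
    max_length=64,
    length_penalty=0.6,
    no_repeat_ngram_size=3,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    repetition_penalty=2.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```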
khushpreet/eyedisease
khushpreet
2023-07-05T19:51:05Z
0
0
keras
[ "keras", "tf-keras", "medical", "image-classification", "arxiv:1910.09700", "region:us" ]
image-classification
2023-07-05T19:48:02Z
--- metrics: - accuracy library_name: keras pipeline_tag: image-classification tags: - medical --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rsilg/dqn-SpaceInvadersNoFrameskip-v4
rsilg
2023-07-05T19:40:58Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-05T19:40:29Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 541.50 +/- 118.85 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rsilg -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rsilg -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rsilg ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
aroot/wsample.43a
aroot
2023-07-05T19:38:28Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-05T18:34:22Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: wsample.43a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wsample.43a This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8306 - Bleu: 4.7146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
aroot/wsample.32a
aroot
2023-07-05T19:38:12Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-05T18:34:12Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: wsample.32a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wsample.32a This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8284 - Bleu: 4.7412 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
Shezus/finetuning-sentiment-model-3000-samples
Shezus
2023-07-05T19:30:51Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-03T22:11:40Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8766666666666667 - name: F1 type: f1 value: 0.877076411960133 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3107 - Accuracy: 0.8767 - F1: 0.8771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
BadreddineHug/donut-base-ocr4
BadreddineHug
2023-07-05T19:27:19Z
74
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-05T18:38:04Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-ocr4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-ocr4 This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3