| Column | Type | Range of values |
|:--|:--|:--|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 18:33:19 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 18:33:14 |
| card | string | lengths 11 to 1.01M |
jhakaran1/bert-essay-concat
jhakaran1
2022-10-29T00:00:25Z
156
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-28T02:20:21Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-essay-concat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-essay-concat This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0735 - Accuracy: 0.6331 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7024 | 1.0 | 3677 | 0.9159 | 0.6329 | | 0.6413 | 2.0 | 7354 | 1.0267 | 0.6346 | | 0.5793 | 3.0 | 11031 | 1.0735 | 0.6331 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
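For illustration, a minimal inference sketch for this checkpoint, assuming the standard transformers text-classification pipeline; the example essay is a placeholder, and the label names come from whatever training config was used, which the card above does not document:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from this row for inference.
classifier = pipeline("text-classification", model="jhakaran1/bert-essay-concat")

# Placeholder input; the expected essay format is not documented in the card.
essay = "The author argues that renewable energy adoption is driven mainly by cost."
print(classifier(essay))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```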
hakurei/bloom-1b1-arb-thesis
hakurei
2022-10-28T22:35:44Z
7
3
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-14T16:02:00Z
--- license: bigscience-bloom-rail-1.0 ---
christyli/vit-base-beans
christyli
2022-10-28T21:59:17Z
32
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-28T21:55:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-beans results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3930 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0349 | 1.0 | 17 | 0.8167 | 0.9323 | | 0.7502 | 2.0 | 34 | 0.6188 | 0.9699 | | 0.5508 | 3.0 | 51 | 0.4856 | 0.9774 | | 0.4956 | 4.0 | 68 | 0.4109 | 0.9774 | | 0.4261 | 5.0 | 85 | 0.3930 | 0.9774 | ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.1+cu102 - Tokenizers 0.12.1
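A minimal usage sketch for this image classifier, assuming the standard transformers image-classification pipeline; the image path is a placeholder, and the class names come from the checkpoint config (if this follows the usual beans dataset, healthy leaves plus two disease types):

```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT checkpoint from this row.
classifier = pipeline("image-classification", model="christyli/vit-base-beans")

# Placeholder path or URL to a bean-leaf photo.
for prediction in classifier("bean_leaf.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```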
sd-concepts-library/urivoldemort
sd-concepts-library
2022-10-28T20:58:35Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-10-28T19:36:37Z
--- license: mit --- ### Urivoldemort on Stable Diffusion Create Uriboldemort images using any context. This was taught to Stable Diffusion via Textual Inversion. Use the `<uriboldemort>` placeholder in the text prompt. You can train your Concept using the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. For inference use this copy of the official [notebook](https://colab.research.google.com/drive/11bIGXVkQJ4bJTSQIjlxDg1OhhX7nDZ01?usp=sharing). Some outputs: ![<uriboldemort> 1](https://huggingface.co/sd-concepts-library/urivoldemort/resolve/main/urtivoldemort7.png) ![<uriboldemort> 2](https://huggingface.co/sd-concepts-library/urivoldemort/resolve/main/urtivoldemort3.png)
sd-concepts-library/anime-background-style-v2
sd-concepts-library
2022-10-28T19:56:39Z
0
24
null
[ "license:mit", "region:us" ]
null
2022-10-28T19:45:11Z
--- license: mit --- ### Anime Background style (v2) on Stable Diffusion This is the `<anime-background-style-v2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<anime-background-style-v2> 0](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/5.jpeg) ![<anime-background-style-v2> 1](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/13.jpeg) ![<anime-background-style-v2> 2](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/9.jpeg) ![<anime-background-style-v2> 3](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/10.jpeg) ![<anime-background-style-v2> 4](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/6.jpeg) ![<anime-background-style-v2> 5](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/4.jpeg) ![<anime-background-style-v2> 6](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/1.jpeg) ![<anime-background-style-v2> 7](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/3.jpeg) ![<anime-background-style-v2> 8](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/12.jpeg) ![<anime-background-style-v2> 9](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/2.jpeg) ![<anime-background-style-v2> 10](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/0.jpeg) ![<anime-background-style-v2> 11](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/7.jpeg) ![<anime-background-style-v2> 12](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/8.jpeg) ![<anime-background-style-v2> 13](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/11.jpeg) Here are images generated with this style: ![the facade of a café in the style of <anime-background-style-v2>](https://i.imgur.com/EE89tm9.png) ![painting of a lush jungle in the style of <anime-background-style-v2>](https://i.imgur.com/peoQF5n.png) ![urban street with brownstones in the style of <anime-background-style-v2>](https://i.imgur.com/zuFgFP9.png) ![wide angle image of a castle made of ice in the style of <anime-background-style-v2>](https://i.imgur.com/uyopxyv.png)
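Besides the linked notebooks, a concept like this can be loaded directly in diffusers; the sketch below is illustrative only and assumes the repository contains a standard textual-inversion embedding that `load_textual_inversion` can read, with Stable Diffusion v1.5 as an assumed base model:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; any SD 1.x checkpoint compatible with the embedding should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <anime-background-style-v2> token into the tokenizer and text encoder.
pipe.load_textual_inversion("sd-concepts-library/anime-background-style-v2")

image = pipe("a quiet train station in the style of <anime-background-style-v2>").images[0]
image.save("station.png")
```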
kyle-lucke/autotrain-planes-1918465011
kyle-lucke
2022-10-28T19:42:45Z
3
0
transformers
[ "transformers", "joblib", "autotrain", "tabular", "classification", "tabular-classification", "dataset:kyle-lucke/autotrain-data-planes", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
tabular-classification
2022-10-28T19:42:12Z
--- tags: - autotrain - tabular - classification - tabular-classification datasets: - kyle-lucke/autotrain-data-planes co2_eq_emissions: emissions: 0.19811345350195664 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1918465011 - CO2 Emissions (in grams): 0.1981 ## Validation Metrics - Loss: 0.011 - Accuracy: 0.997 - Macro F1: 0.916 - Micro F1: 0.997 - Weighted F1: 0.996 - Macro Precision: 0.999 - Micro Precision: 0.997 - Weighted Precision: 0.997 - Macro Recall: 0.867 - Micro Recall: 0.997 - Weighted Recall: 0.997 ## Usage ```python import json import joblib import pandas as pd model = joblib.load('model.joblib') config = json.load(open('config.json')) features = config['features'] data = pd.read_csv("data.csv") data = data[features] data.columns = ["feat_" + str(col) for col in data.columns] predictions = model.predict(data) # or model.predict_proba(data) ```
hsuvaskakoty/bart_def_gen_40k
hsuvaskakoty
2022-10-28T19:18:37Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-26T17:53:02Z
This is a fine-tuned BART model for definition generation. It is still at the prototype stage, fine-tuned on only 40k training instances of (definition, context) pairs for 3 epochs. The evaluation loss is still around 2.30, and the beam size is 4.
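A hedged usage sketch, since no code is given: it assumes a plain text2text-generation pipeline with the beam size of 4 mentioned above, and the input below is only a placeholder because the exact (definition, context) prompt format is not described:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="hsuvaskakoty/bart_def_gen_40k")

# Placeholder context; the real prompt format used during fine-tuning is not documented.
context = "The glacier calved, sending a huge iceberg drifting into the fjord."
print(generator(context, num_beams=4, max_length=64)[0]["generated_text"])
```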
ViktorDo/SciBERT-POWO_Lifecycle_Finetuned
ViktorDo
2022-10-28T19:12:38Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-28T18:06:36Z
--- tags: - generated_from_trainer model-index: - name: SciBERT-POWO_Lifecycle_Finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SciBERT-POWO_Lifecycle_Finetuned This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0899 | 1.0 | 1704 | 0.0795 | | 0.0845 | 2.0 | 3408 | 0.0836 | | 0.0684 | 3.0 | 5112 | 0.0812 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
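For illustration, the checkpoint can also be scored without the pipeline helper; this is a minimal sketch, the input text is a placeholder, and the label names are simply whatever `id2label` mapping the checkpoint config carries (the card reports only a validation loss):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ViktorDo/SciBERT-POWO_Lifecycle_Finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder botanical description; the training data is not documented above.
text = "A perennial herb with a woody rootstock, flowering in late spring."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Report each class probability under the checkpoint's own label names.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```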
leslyarun/grammatical-error-correction-quantized
leslyarun
2022-10-28T17:55:05Z
14
1
transformers
[ "transformers", "onnx", "t5", "text2text-generation", "grammar", "en", "dataset:leslyarun/c4_200m_gec_train100k_test25k", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-28T13:10:29Z
--- language: en tags: - grammar - text2text-generation datasets: - leslyarun/c4_200m_gec_train100k_test25k --- # Get grammatical corrections for your English text, trained on a subset of the C4-200M dataset - ONNX Quantized Model # Use the code below to run the model ```python from transformers import AutoTokenizer from optimum.onnxruntime import ORTModelForSeq2SeqLM from optimum.pipelines import pipeline tokenizer = AutoTokenizer.from_pretrained("leslyarun/grammatical-error-correction-quantized") model = ORTModelForSeq2SeqLM.from_pretrained("leslyarun/grammatical-error-correction-quantized", encoder_file_name="encoder_model_quantized.onnx", decoder_file_name="decoder_model_quantized.onnx", decoder_with_past_file_name="decoder_with_past_model_quantized.onnx") text2text_generator = pipeline("text2text-generation", model=model, tokenizer=tokenizer) sentence = "Your text with grammar mistakes goes here" output = text2text_generator("grammar: " + sentence) print(output[0]["generated_text"]) ```
ybelkada/switch-base-8-xsum
ybelkada
2022-10-28T17:54:45Z
12
3
transformers
[ "transformers", "pytorch", "switch_transformers", "text2text-generation", "en", "dataset:c4", "dataset:xsum", "arxiv:2101.03961", "arxiv:2210.11416", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-28T13:29:07Z
--- language: - en tags: - text2text-generation widget: - text: "summarize: Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital. Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well. Therefore, Peter stayed with her at the hospital for 3 days without leaving." example_title: "Summarization" datasets: - c4 - xsum license: apache-2.0 --- # Model Card for Switch Transformers Base - 8 experts ![model image](https://s3.amazonaws.com/moonup/production/uploads/1666966931908-62441d1d9fdefb55a0b7d12c.png) # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Uses](#uses) 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 5. [Training Details](#training-details) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Citation](#citation) 9. [Model Card Authors](#model-card-authors) # TL;DR Switch Transformers is a Mixture of Experts (MoE) model trained on Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the Feed Forward layers replaced by the Sparse MLP layers containing "experts" MLP. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf) the model enables faster training (scaling properties) while being better than T5 on fine-tuned tasks. As mentioned in the first few lines of the abstract : > we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf). # Model Details ## Model Description - **Model type:** Language model - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=switch) - **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints) - **Resources for more information:** - [Research paper](https://arxiv.org/pdf/2101.03961.pdf) - [GitHub Repo](https://github.com/google-research/t5x) - [Hugging Face Switch Transformers Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/switch_transformers) # Usage Note that these checkpoints has been trained on Masked-Language Modeling (MLM) task. Therefore the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing) Find below some example scripts on how to use the model in `transformers`: ## Using the Pytorch model ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8") model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8") input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." 
input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details> ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8") model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto") input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details> ### Running the model on a GPU using different precisions #### FP16 <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8") model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto", torch_dtype=torch.float16) input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details> #### INT8 <details> <summary> Click to expand </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8") model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto", load_in_8bit=True) input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details> # Uses ## Direct Use and Downstream Use The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that: > The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details. ## Out-of-Scope Use More information needed. # Bias, Risks, and Limitations More information needed. ## Ethical considerations and risks More information needed. ## Known Limitations More information needed. ## Sensitive Use: > SwitchTransformers should not be applied for any unacceptable use cases, e.g., generation of abusive speech. # Training Details ## Training Data The model was trained on a Masked Language Modeling task, on Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
## Training Procedure According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf): > These models are based on pretrained SwitchTransformers and are not fine-tuned. It is normal if they perform well on zero-shot tasks. The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax). # Evaluation ## Testing Data, Factors & Metrics The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation: ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1666967660372-62441d1d9fdefb55a0b7d12c.png) For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf). ## Results For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4. - **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @misc{https://doi.org/10.48550/arxiv.2101.03961, doi = {10.48550/ARXIV.2101.03961}, url = {https://arxiv.org/abs/2101.03961}, author = {Fedus, William and Zoph, Barret and Shazeer, Noam}, keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity}, publisher = {arXiv}, year = {2021}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
ivanzidov/setfit-occupation
ivanzidov
2022-10-28T17:48:19Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-28T11:39:19Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 125000 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 125000, "warmup_steps": 12500, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
leo93/all-15-bert-finetuned-ner
leo93
2022-10-28T17:47:37Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-28T01:42:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: all-15-bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-15-bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0081 - Precision: 0.9630 - Recall: 0.9661 - F1: 0.9646 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.014 | 1.0 | 6693 | 0.0080 | 0.9048 | 0.9363 | 0.9203 | 0.9976 | | 0.007 | 2.0 | 13386 | 0.0070 | 0.9116 | 0.9459 | 0.9284 | 0.9976 | | 0.0034 | 3.0 | 20079 | 0.0050 | 0.9514 | 0.9529 | 0.9522 | 0.9985 | | 0.0027 | 4.0 | 26772 | 0.0065 | 0.9360 | 0.9618 | 0.9487 | 0.9982 | | 0.002 | 5.0 | 33465 | 0.0062 | 0.9485 | 0.9555 | 0.9520 | 0.9984 | | 0.0008 | 6.0 | 40158 | 0.0069 | 0.9498 | 0.9468 | 0.9483 | 0.9983 | | 0.0013 | 7.0 | 46851 | 0.0059 | 0.9591 | 0.9618 | 0.9605 | 0.9987 | | 0.0007 | 8.0 | 53544 | 0.0072 | 0.9635 | 0.9594 | 0.9614 | 0.9986 | | 0.0003 | 9.0 | 60237 | 0.0076 | 0.9656 | 0.9621 | 0.9638 | 0.9987 | | 0.0006 | 10.0 | 66930 | 0.0080 | 0.9598 | 0.9625 | 0.9611 | 0.9986 | | 0.0007 | 11.0 | 73623 | 0.0072 | 0.9584 | 0.9651 | 0.9618 | 0.9986 | | 0.0 | 12.0 | 80316 | 0.0073 | 0.9606 | 0.9658 | 0.9632 | 0.9987 | | 0.0001 | 13.0 | 87009 | 0.0072 | 0.9649 | 0.9636 | 0.9642 | 0.9987 | | 0.0 | 14.0 | 93702 | 0.0078 | 0.9629 | 0.9665 | 0.9647 | 0.9987 | | 0.0 | 15.0 | 100395 | 0.0081 | 0.9630 | 0.9661 | 0.9646 | 0.9987 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
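A minimal inference sketch for this NER checkpoint, assuming the standard token-classification pipeline; the sentence is a placeholder, and the entity types are whatever the checkpoint config defines:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="leo93/all-15-bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Ada Lovelace worked with Charles Babbage in London."))
```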
yubol/all-15-bert-finetuned-ner
yubol
2022-10-28T17:47:37Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-28T01:42:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: all-15-bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-15-bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0081 - Precision: 0.9630 - Recall: 0.9661 - F1: 0.9646 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.014 | 1.0 | 6693 | 0.0080 | 0.9048 | 0.9363 | 0.9203 | 0.9976 | | 0.007 | 2.0 | 13386 | 0.0070 | 0.9116 | 0.9459 | 0.9284 | 0.9976 | | 0.0034 | 3.0 | 20079 | 0.0050 | 0.9514 | 0.9529 | 0.9522 | 0.9985 | | 0.0027 | 4.0 | 26772 | 0.0065 | 0.9360 | 0.9618 | 0.9487 | 0.9982 | | 0.002 | 5.0 | 33465 | 0.0062 | 0.9485 | 0.9555 | 0.9520 | 0.9984 | | 0.0008 | 6.0 | 40158 | 0.0069 | 0.9498 | 0.9468 | 0.9483 | 0.9983 | | 0.0013 | 7.0 | 46851 | 0.0059 | 0.9591 | 0.9618 | 0.9605 | 0.9987 | | 0.0007 | 8.0 | 53544 | 0.0072 | 0.9635 | 0.9594 | 0.9614 | 0.9986 | | 0.0003 | 9.0 | 60237 | 0.0076 | 0.9656 | 0.9621 | 0.9638 | 0.9987 | | 0.0006 | 10.0 | 66930 | 0.0080 | 0.9598 | 0.9625 | 0.9611 | 0.9986 | | 0.0007 | 11.0 | 73623 | 0.0072 | 0.9584 | 0.9651 | 0.9618 | 0.9986 | | 0.0 | 12.0 | 80316 | 0.0073 | 0.9606 | 0.9658 | 0.9632 | 0.9987 | | 0.0001 | 13.0 | 87009 | 0.0072 | 0.9649 | 0.9636 | 0.9642 | 0.9987 | | 0.0 | 14.0 | 93702 | 0.0078 | 0.9629 | 0.9665 | 0.9647 | 0.9987 | | 0.0 | 15.0 | 100395 | 0.0081 | 0.9630 | 0.9661 | 0.9646 | 0.9987 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
Arklyn/fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-test
Arklyn
2022-10-28T16:22:36Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_10_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-07T04:10:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_10_0 model-index: - name: fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-test This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3076 - Wer: 0.2971 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 7 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 14 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9436 | 9.99 | 570 | 2.7467 | 1.0 | | 1.0498 | 19.99 | 1140 | 0.3630 | 0.3965 | | 0.6789 | 29.99 | 1710 | 0.3396 | 0.3712 | | 0.5259 | 39.99 | 2280 | 0.3204 | 0.3241 | | 0.4701 | 49.99 | 2850 | 0.3118 | 0.3005 | | 0.4248 | 59.99 | 3420 | 0.3076 | 0.2971 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.10.0+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
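A short, assumed usage sketch for this speech recognizer via the automatic-speech-recognition pipeline; the audio path is a placeholder and should point to Indonesian speech sampled at 16 kHz, the rate XLS-R models expect:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Arklyn/fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-test",
)

# Placeholder path to a 16 kHz Indonesian speech clip.
print(asr("indonesian_sample.wav")["text"])
```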
tlttl/tluo_xml_roberta_base_amazon_review_sentiment
tlttl
2022-10-28T15:51:48Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-28T07:26:12Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: tluo_xml_roberta_base_amazon_review_sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tluo_xml_roberta_base_amazon_review_sentiment This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9552 - Accuracy: 0.6003 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 123 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5664 | 0.33 | 5000 | 1.3816 | 0.5688 | | 0.9494 | 0.67 | 10000 | 0.9702 | 0.5852 | | 0.9613 | 1.0 | 15000 | 0.9545 | 0.5917 | | 0.8611 | 1.33 | 20000 | 0.9689 | 0.5953 | | 0.8636 | 1.67 | 25000 | 0.9556 | 0.5943 | | 0.8582 | 2.0 | 30000 | 0.9552 | 0.6003 | | 0.7555 | 2.33 | 35000 | 1.0001 | 0.5928 | | 0.7374 | 2.67 | 40000 | 1.0037 | 0.594 | | 0.733 | 3.0 | 45000 | 0.9976 | 0.5983 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
PabloZubeldia/distilbert-base-uncased-finetuned-tweets
PabloZubeldia
2022-10-28T15:33:38Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T21:15:08Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-tweets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-tweets This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2703 - Accuracy: 0.9068 - F1: 0.9081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3212 | 1.0 | 143 | 0.2487 | 0.8989 | 0.8991 | | 0.2031 | 2.0 | 286 | 0.2268 | 0.9077 | 0.9074 | | 0.1474 | 3.0 | 429 | 0.2385 | 0.9094 | 0.9107 | | 0.1061 | 4.0 | 572 | 0.2516 | 0.9103 | 0.9111 | | 0.0804 | 5.0 | 715 | 0.2703 | 0.9068 | 0.9081 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
ajankelo/pklot_small_model
ajankelo
2022-10-28T14:32:23Z
0
0
null
[ "PyTorch", "vfnet", "icevision", "en", "license:mit", "region:us" ]
null
2022-10-27T21:11:41Z
--- language: en license: mit tags: - PyTorch - vfnet - icevision --- # Small PKLot This model is trained on a subset of the PKLot dataset (first introduced in [this paper](https://www.inf.ufpr.br/lesoliveira/download/ESWA2015.pdf)). The subset comprises 50 fully annotated images for training. ## Citation for original dataset Almeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015.
alefarasin/ppo-CartPole-v1
alefarasin
2022-10-28T13:06:55Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-10-28T13:06:49Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 155.80 +/- 45.55 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. To learn to code your own PPO agent and train it, see Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8 # Hyperparameters ```python {'exp_name': 'dummy_name' 'f': '/root/.local/share/jupyter/runtime/kernel-e1e9a3a5-8345-4438-b691-f71df9c2a28b.json' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'alefarasin/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
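The last two values in the hyperparameter dump are derived from the ones above them; the short check below (an added illustration, not part of the training script) reproduces that arithmetic:

```python
# Derived PPO rollout sizes from the hyperparameters listed above.
num_envs = 4
num_steps = 128
num_minibatches = 4

batch_size = num_envs * num_steps               # 4 * 128 = 512 transitions per update
minibatch_size = batch_size // num_minibatches  # 512 / 4 = 128 per gradient step

print(batch_size, minibatch_size)  # 512 128
```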
gokul-g-menon/xls-r_fine_tuned
gokul-g-menon
2022-10-28T13:01:13Z
74
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-26T16:47:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: xls-r_fine_tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r_fine_tuned This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
Rocketknight1/temp_upload_test
Rocketknight1
2022-10-28T12:29:16Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-28T12:28:55Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Rocketknight1/temp_upload_test results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/temp_upload_test This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6858 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 0.6858 | 0 | ### Framework versions - Transformers 4.24.0.dev0 - TensorFlow 2.10.0 - Datasets 2.6.1 - Tokenizers 0.11.0
ayushtiwari/bert-finetuned-ner
ayushtiwari
2022-10-28T12:28:11Z
11
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-27T20:58:57Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ayushtiwari/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ayushtiwari/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0271 - Validation Loss: 0.0549 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1723 | 0.0643 | 0 | | 0.0465 | 0.0564 | 1 | | 0.0271 | 0.0549 | 2 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.10.0 - Datasets 2.6.1 - Tokenizers 0.13.1
teacookies/autotrain-28102022-cert2-1916264970
teacookies
2022-10-28T12:26:46Z
13
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-28102022-cert2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-28T12:15:55Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-28102022-cert2 co2_eq_emissions: emissions: 17.982023070008026 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1916264970 - CO2 Emissions (in grams): 17.9820 ## Validation Metrics - Loss: 0.002 - Accuracy: 1.000 - Precision: 0.980 - Recall: 0.986 - F1: 0.983 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-28102022-cert2-1916264970 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-28102022-cert2-1916264970", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-28102022-cert2-1916264970", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
ashish23993/t5-small-finetuned-xsum-ashish
ashish23993
2022-10-28T11:53:09Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-28T11:49:24Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-finetuned-xsum-ashish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-ashish This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 8 | 2.2555 | 21.098 | 9.1425 | 17.7091 | 19.9721 | 19.0 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
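A hedged usage sketch for this summarization-style checkpoint; the `summarize:` prefix is the usual T5 convention and is assumed here, since the card does not state the prompt format used during fine-tuning:

```python
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="ashish23993/t5-small-finetuned-xsum-ashish")

article = (
    "The city council approved a new cycling plan on Tuesday, adding 40 km of "
    "protected lanes and lowering speed limits on residential streets."
)
# "summarize:" is the standard T5 task prefix; its use here is an assumption.
print(summarizer("summarize: " + article, max_length=32)[0]["generated_text"])
```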
ivanzidov/my-awesome-setfit-model
ivanzidov
2022-10-28T10:31:46Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-28T10:25:42Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
roa7n/DNABert_K6_G_quad_3
roa7n
2022-10-28T10:04:20Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-28T08:21:20Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: DNABert_K6_G_quad_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DNABert_K6_G_quad_3 This model is a fine-tuned version of [armheb/DNA_bert_6](https://huggingface.co/armheb/DNA_bert_6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0722 - Accuracy: 0.9761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0912 | 1.0 | 9375 | 0.0883 | 0.9707 | | 0.0668 | 2.0 | 18750 | 0.0723 | 0.9757 | | 0.0598 | 3.0 | 28125 | 0.0722 | 0.9761 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
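No usage example is given above, so the sketch below is a guess at the expected input format: DNA_bert_6 tokenizers conventionally take overlapping 6-mers separated by spaces, and both the helper and the example sequence are illustrative assumptions:

```python
from transformers import pipeline

def to_kmers(sequence: str, k: int = 6) -> str:
    """Format a DNA sequence as space-separated overlapping k-mers (assumed input format)."""
    return " ".join(sequence[i:i + k] for i in range(len(sequence) - k + 1))

classifier = pipeline("text-classification", model="roa7n/DNABert_K6_G_quad_3")

# Illustrative G-rich sequence; the real inputs and label meanings are not documented above.
print(classifier(to_kmers("GGGTTAGGGTTAGGGTTAGGGACTGACTGACT")))
```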
caskcsg/cotmae_base_msmarco_retriever
caskcsg
2022-10-28T08:30:08Z
105
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "arxiv:2208.07670", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-28T08:01:25Z
--- pipeline_tag: sentence-similarity tags: - feature-extraction - sentence-similarity - transformers --- # CoT-MAE MS-Marco Passage Retriever CoT-MAE is a transformers-based Mask Auto-Encoder pretraining architecture designed for Dense Passage Retrieval. **CoT-MAE MS-Marco Passage Retriever** is a retriever trained with BM25 hard negatives and CoT-MAE-retriever-mined MS-Marco hard negatives using the [Tevatron](https://github.com/texttron/tevatron) toolkit. Specifically, we first trained a stage-one retriever using BM25 hard negatives, used that retriever to mine additional hard negatives, and then trained a stage-two retriever using both the BM25 hard negatives and the stage-one-mined hard negatives. This release is the stage-two retriever. Details can be found in our paper and code. Paper: [ConTextual Mask Auto-Encoder for Dense Passage Retrieval](https://arxiv.org/abs/2208.07670). Code: [caskcsg/ir/cotmae](https://github.com/caskcsg/ir/tree/main/cotmae) ## Scores ### MS-Marco Passage full-ranking | MRR @10 | recall@1 | recall@50 | recall@1k | QueriesRanked | |----------|----------|-----------|-----------|----------------| | 0.394431 | 0.265903 | 0.870344 | 0.986676 | 6980 | ## Citations If you find our work useful, please cite our paper. ```bibtex @misc{https://doi.org/10.48550/arxiv.2208.07670, doi = {10.48550/ARXIV.2208.07670}, url = {https://arxiv.org/abs/2208.07670}, author = {Wu, Xing and Ma, Guangyuan and Lin, Meng and Lin, Zijia and Wang, Zhongyuan and Hu, Songlin}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {ConTextual Mask Auto-Encoder for Dense Passage Retrieval}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
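The card explains training but not how to embed text at inference time. The sketch below is a minimal, assumption-laden example: it loads the checkpoint as a plain BERT encoder and takes the [CLS] hidden state as the dense vector, the common Tevatron-style convention, which should be verified against the authors' code:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "caskcsg/cotmae_base_msmarco_retriever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        # Assumed pooling: use the [CLS] token's last hidden state as the embedding.
        return encoder(**batch).last_hidden_state[:, 0]

query = embed(["what is dense passage retrieval"])
passages = embed([
    "Dense retrieval encodes queries and passages into vectors and ranks by similarity.",
    "The 1998 World Cup was held in France.",
])
print(query @ passages.T)  # higher dot product = more relevant passage
```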
XaviXva/distilbert-base-uncased-finetuned-paws
XaviXva
2022-10-28T08:14:21Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:pawsx", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-26T09:59:03Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - pawsx metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-paws results: - task: name: Text Classification type: text-classification dataset: name: pawsx type: pawsx args: en metrics: - name: Accuracy type: accuracy value: 0.8355 - name: F1 type: f1 value: 0.8361579553422098 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-paws This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pawsx dataset. It achieves the following results on the evaluation set: - Loss: 0.3850 - Accuracy: 0.8355 - F1: 0.8362 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6715 | 1.0 | 772 | 0.5982 | 0.6785 | 0.6799 | | 0.4278 | 2.0 | 1544 | 0.3850 | 0.8355 | 0.8362 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
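PAWS-X is a sentence-pair task, so inference needs both sentences; a minimal sketch is shown below, passing them as a text/text_pair dictionary (the label names, typically paraphrase vs. not, come from the checkpoint config):

```python
from transformers import pipeline

paraphrase = pipeline(
    "text-classification",
    model="XaviXva/distilbert-base-uncased-finetuned-paws",
)

pair = {
    "text": "The company was founded in 2004 in Cambridge.",
    "text_pair": "In 2004, the company was established in Cambridge.",
}
print(paraphrase(pair))  # label names depend on the checkpoint's id2label mapping
```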
teacookies/autotrain-28102022-1914864930
teacookies
2022-10-28T07:41:13Z
11
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-28102022", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-28T07:30:27Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-28102022 co2_eq_emissions: emissions: 19.19485186697524 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1914864930 - CO2 Emissions (in grams): 19.1949 ## Validation Metrics - Loss: 0.002 - Accuracy: 1.000 - Precision: 0.982 - Recall: 0.984 - F1: 0.983 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-28102022-1914864930 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-28102022-1914864930", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-28102022-1914864930", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
ComCom/gpt2-small
ComCom
2022-10-28T05:53:14Z
273
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "exbert", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-28T05:43:05Z
--- language: en tags: - exbert license: mit --- This repository has been forked from https://huggingface.co/gpt2 --- # GPT-2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = TFGPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. 
## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
bpatwa-shi/bert-finetuned-ner
bpatwa-shi
2022-10-28T05:22:16Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-28T03:37:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9333113238692637 - name: Recall type: recall value: 0.9515314708852238 - name: F1 type: f1 value: 0.9423333333333334 - name: Accuracy type: accuracy value: 0.9870636368988049 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0587 - Precision: 0.9333 - Recall: 0.9515 - F1: 0.9423 - Accuracy: 0.9871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.086 | 1.0 | 1756 | 0.0634 | 0.9186 | 0.9364 | 0.9274 | 0.9829 | | 0.0372 | 2.0 | 3512 | 0.0598 | 0.9328 | 0.9478 | 0.9402 | 0.9860 | | 0.0217 | 3.0 | 5268 | 0.0587 | 0.9333 | 0.9515 | 0.9423 | 0.9871 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.10.2 - Datasets 2.6.1 - Tokenizers 0.13.1
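Since the card's usage sections are empty, here is a short, hedged inference sketch (my addition, not the author's): it assumes the model follows the standard CoNLL-2003 label set, so the built-in token-classification pipeline with entity aggregation should work.

```python
from transformers import pipeline

# Groups word-piece predictions into whole entities (PER/ORG/LOC/MISC for CoNLL-2003)
ner = pipeline(
    "token-classification",
    model="bpatwa-shi/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```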
doodlevelyn/roberta-base
doodlevelyn
2022-10-28T04:44:44Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-28T00:00:46Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3953 - Precision: 0.5295 - Recall: 0.2861 - F1: 0.3715 - Accuracy: 0.9648 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0001 | 1.0 | 7365 | 0.4150 | 0.4892 | 0.1876 | 0.2712 | 0.9603 | | 0.0 | 2.0 | 14730 | 0.4248 | 0.6005 | 0.2399 | 0.3428 | 0.9638 | | 0.0 | 3.0 | 22095 | 0.3953 | 0.5295 | 0.2861 | 0.3715 | 0.9648 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
huggingtweets/shinononetu
huggingtweets
2022-10-28T04:43:17Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-28T04:42:41Z
--- language: en thumbnail: http://www.huggingtweets.com/shinononetu/1666932192965/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1381323487499980806/i2qeW2Qi_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Netu</div> <div style="text-align: center; font-size: 14px;">@shinononetu</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Netu. | Data | Netu | | --- | --- | | Tweets downloaded | 1912 | | Retweets | 627 | | Short tweets | 453 | | Tweets kept | 832 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38lbhqc9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shinononetu's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tj5n1bk) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tj5n1bk/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/shinononetu') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lixiangchun/imagenet-swav-resnet50w2
lixiangchun
2022-10-28T04:13:37Z
0
0
tf-keras
[ "tf-keras", "onnx", "region:us" ]
null
2022-10-20T04:06:01Z
```python
# Usage example: trace_layer2.py is the model-definition script shipped with this repository.
import trace_layer2 as models
import torch

# Dummy ImageNet-sized input (batch of 1, 3 x 224 x 224)
x = torch.randn(1, 3, 224, 224)

# Load the SwAV ResNet-50w2 (layer2) weights into the eager model
state_dict = torch.load('swav_imagenet_layer2.pt', map_location='cpu')
model = models.resnet50w2()
model.load_state_dict(state_dict)
model.eval()
feature = model(x)

# Alternatively, load the TorchScript-traced version of the same model
traced_model = torch.jit.load('traced_swav_imagenet_layer2.pt', map_location='cpu')
traced_model.eval()
feature = traced_model(x)
```
huggingtweets/missalykatt
huggingtweets
2022-10-28T02:37:20Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-28T02:34:18Z
--- language: en thumbnail: http://www.huggingtweets.com/missalykatt/1666924619450/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1556386443752222720/Fzb-hZ4Q_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MissAlyKatt 🏳️‍🌈♀️</div> <div style="text-align: center; font-size: 14px;">@missalykatt</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from MissAlyKatt 🏳️‍🌈♀️. | Data | MissAlyKatt 🏳️‍🌈♀️ | | --- | --- | | Tweets downloaded | 3217 | | Retweets | 361 | | Short tweets | 757 | | Tweets kept | 2099 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yaoalt1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @missalykatt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2uetdofk) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2uetdofk/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/missalykatt') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
helloway/simple
helloway
2022-10-28T02:00:19Z
0
0
null
[ "audio-classification", "license:apache-2.0", "region:us" ]
audio-classification
2022-10-28T01:51:37Z
--- license: apache-2.0 tags: - audio-classification ---
Sunny5353/distilbert-base-uncased-finetuned-imdb
Sunny5353
2022-10-28T01:40:18Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-28T01:29:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.76 | 1.0 | 157 | 0.6640 | | 0.688 | 2.0 | 314 | 0.6581 | | 0.6768 | 3.0 | 471 | 0.6604 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
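A brief, hedged example (not in the original card) of querying the fine-tuned masked-language model; DistilBERT uses the `[MASK]` token.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Sunny5353/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was absolutely [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")  # top predictions with scores
```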
Gub/model_gub_v2
Gub
2022-10-28T00:58:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-10-20T05:17:16Z
--- license: creativeml-openrail-m ---
OpenMatch/cocodr-large-msmarco-idro-only
OpenMatch
2022-10-28T00:45:35Z
105
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-28T00:42:33Z
--- license: mit --- This model has been pretrained on the MS MARCO corpus and then fine-tuned on the MS MARCO training data with implicit distributionally robust optimization (iDRO), following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR. This model uses BERT-large as the backbone and has 335M parameters.
TingChenChang/t5-end2end-questions-generation
TingChenChang
2022-10-28T00:36:02Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-27T14:37:17Z
--- tags: - generated_from_trainer model-index: - name: t5-end2end-questions-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-end2end-questions-generation This model is a fine-tuned version of [TingChenChang/t5-end2end-questions-generation](https://huggingface.co/TingChenChang/t5-end2end-questions-generation) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5711 | 0.4 | 100 | 1.6119 | | 1.5353 | 0.8 | 200 | 1.6052 | | 1.502 | 1.2 | 300 | 1.6082 | | 1.4525 | 1.6 | 400 | 1.5918 | | 1.4463 | 2.0 | 500 | 1.5847 | | 1.3885 | 2.4 | 600 | 1.6236 | | 1.4029 | 2.8 | 700 | 1.5962 | | 1.3947 | 3.2 | 800 | 1.5932 | | 1.3685 | 3.6 | 900 | 1.5898 | | 1.3926 | 4.0 | 1000 | 1.5624 | | 1.4666 | 4.4 | 1100 | 1.5535 | | 1.4573 | 4.8 | 1200 | 1.5483 | | 1.4342 | 5.2 | 1300 | 1.5449 | | 1.4281 | 5.6 | 1400 | 1.5347 | | 1.4031 | 6.0 | 1500 | 1.5456 | | 1.375 | 6.4 | 1600 | 1.5375 | | 1.3867 | 6.8 | 1700 | 1.5393 | | 1.3763 | 7.2 | 1800 | 1.5401 | | 1.357 | 7.6 | 1900 | 1.5361 | | 1.3568 | 8.0 | 2000 | 1.5295 | | 1.3503 | 8.4 | 2100 | 1.5377 | | 1.3335 | 8.8 | 2200 | 1.5353 | | 1.3416 | 9.2 | 2300 | 1.5288 | | 1.3179 | 9.6 | 2400 | 1.5324 | | 1.3276 | 10.0 | 2500 | 1.5291 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu102 - Datasets 2.6.1 - Tokenizers 0.12.1
caffsean/bert-base-cased-deep-ritmo
caffsean
2022-10-28T00:17:00Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T03:19:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-deep-ritmo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-deep-ritmo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.0463 | 1.0 | 1875 | 3.7428 | | 3.3393 | 2.0 | 3750 | 3.0259 | | 2.7435 | 3.0 | 5625 | 2.5837 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
allenai/scirepeval_adapters_qry
allenai
2022-10-28T00:06:24Z
12
1
adapter-transformers
[ "adapter-transformers", "adapterhub:scirepeval/adhoc_search", "bert", "dataset:allenai/scirepeval", "region:us" ]
null
2022-10-28T00:06:13Z
--- tags: - adapterhub:scirepeval/adhoc_search - adapter-transformers - bert datasets: - allenai/scirepeval --- # Adapter `allenai/scirepeval_adapters_qry` for malteos/scincl An [adapter](https://adapterhub.ml) for the `malteos/scincl` model that was trained on the [scirepeval/adhoc_search](https://adapterhub.ml/explore/scirepeval/adhoc_search/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("malteos/scincl") adapter_name = model.load_adapter("allenai/scirepeval_adapters_qry", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
OpenMatch/co-condenser-large
OpenMatch
2022-10-28T00:03:42Z
33
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T23:56:37Z
--- license: mit --- This model has been pretrained on MS MARCO following the approach described in the paper **Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval**. The model can be used to reproduce the experimental results in the GitHub repository https://github.com/OpenMatch/COCO-DR. This model uses BERT-large as the backbone and has 335M parameters.
allenai/scirepeval_adapters_clf
allenai
2022-10-28T00:03:35Z
14
0
adapter-transformers
[ "adapter-transformers", "adapterhub:scirepeval/classification", "bert", "dataset:allenai/scirepeval", "region:us" ]
null
2022-10-28T00:03:26Z
--- tags: - adapterhub:scirepeval/classification - adapter-transformers - bert datasets: - allenai/scirepeval --- # Adapter `allenai/scirepeval_adapters_clf` for malteos/scincl An [adapter](https://adapterhub.ml) for the `malteos/scincl` model that was trained on the [scirepeval/classification](https://adapterhub.ml/explore/scirepeval/classification/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("malteos/scincl") adapter_name = model.load_adapter("allenai/scirepeval_adapters_clf", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
huggingtweets/f1_nn0
huggingtweets
2022-10-27T23:52:43Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-27T23:51:36Z
--- language: en thumbnail: http://www.huggingtweets.com/f1_nn0/1666914758812/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1207307756610445315/5rbKIvN6_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">chilgar</div> <div style="text-align: center; font-size: 14px;">@f1_nn0</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from chilgar. | Data | chilgar | | --- | --- | | Tweets downloaded | 1284 | | Retweets | 56 | | Short tweets | 384 | | Tweets kept | 844 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6g2hbq09/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @f1_nn0's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34clozqp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34clozqp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/f1_nn0') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
wavymulder/zelda-diffusion-HN
wavymulder
2022-10-27T21:32:27Z
0
18
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-10-25T01:06:42Z
--- license: creativeml-openrail-m --- **Zelda Diffusion - Hypernet** [*DOWNLOAD LINK*](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/zeldaBOTW.pt) - This is a hypernet trained on screenshots of Princess Zelda from BOTW ![Basic Example](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/zeldaNet-example_websize.jpg) Here's a random batch of 9 images to show the hypernet uncherrypicked. The prompt is "anime princess zelda volumetric lighting" and the negative prompt is "cel render 3d animation" ![Random 9](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/batchof9_websize.jpg) and [a link to more](https://i.imgur.com/NixQGid.jpg) --- Tips: You'll want to adjust the hypernetwork strength depending on what style you're trying to put Zelda into. I usually keep it at 80% strength and go from there. This hypernetwork helps make Zelda look more like the BOTW Zelda. You still have to prompt for what you want. Extra weight might sometimes need to be applied to get her to wear costumes. You may also have luck putting her name closer to the end of the prompt than you normally would. Since the hypernetwork is trained on screenshots from the videogame, it imparts a heavy Cel Shading effect [(Example here)](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/00108-920950.png). You can minimize this by negative prompting "cel". I believe every example posted here uses this. The hypernet can be used either with very simple prompting, as shown above, or a prompt of your favourite artists. ![Artists Example](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/anime_example.jpg) You can put this hypernet on top of different models to create some really cool Zeldas, such as this one made with [Nitrosocke](https://huggingface.co/nitrosocke)'s [Modern Disney Model](https://huggingface.co/nitrosocke/modern-disney-diffusion). ![Modern Disney Example](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/modernDisney%20example.png)
Aadarsh/bert-finetuned-ner
Aadarsh
2022-10-27T21:31:02Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-26T22:08:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1429 - Precision: 0.4954 - Recall: 0.6136 - F1: 0.5482 - Accuracy: 0.9642 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 141 | 0.2894 | 0.4649 | 0.3258 | 0.3831 | 0.9219 | | No log | 2.0 | 282 | 0.1767 | 0.4706 | 0.4545 | 0.4624 | 0.9487 | | No log | 3.0 | 423 | 0.1429 | 0.4954 | 0.6136 | 0.5482 | 0.9642 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
OpenMatch/cocodr-base-msmarco-idro-only
OpenMatch
2022-10-27T21:26:19Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-10-27T21:21:56Z
--- license: mit --- This model has been pretrained on the MS MARCO corpus and then fine-tuned on the MS MARCO training data with implicit distributionally robust optimization (iDRO), following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR. This model uses BERT-base as the backbone and has 110M parameters.
ViktorDo/SciBERT-POWO_Epiphyte_Finetuned
ViktorDo
2022-10-27T21:10:45Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T19:53:27Z
--- tags: - generated_from_trainer model-index: - name: SciBERT-POWO_Epiphyte_Finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SciBERT-POWO_Epiphyte_Finetuned This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0898 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0909 | 1.0 | 2063 | 0.0860 | | 0.0763 | 2.0 | 4126 | 0.1000 | | 0.0627 | 3.0 | 6189 | 0.0898 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
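The usage sections above are empty, so here is a hedged classification sketch (my addition, not the author's). The card does not document the label names, so the printed label (e.g. `LABEL_0`/`LABEL_1`, presumably epiphyte vs. non-epiphyte) should be verified against the model's `id2label` config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ViktorDo/SciBERT-POWO_Epiphyte_Finetuned")
result = clf("An epiphytic herb growing on tree trunks in montane forest.")
print(result)  # label semantics are not documented in this card
```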
Phantasion/phaninc
Phantasion
2022-10-27T21:03:33Z
0
1
null
[ "region:us" ]
null
2022-10-27T20:18:49Z
![robot dog](https://i.imgur.com/rLq8IdH.png "robot dog") Phaninc is a model based on my cyberpunk tumblr blog phantasyinc. One thing that has frustrated me with AI art is the generic quality of prompting for cyberpunk imagery, so I went through my blog and curated a dataset for 25 new keywords to get the results I desire. I have been heavily inspired by the work of nousr on robodiffusion, whose model gave me a lot of results I love. I have utilised the new FAST dreambooth method and run it at 20,000 steps on 684 images (around 800 steps per concept). At the time of writing the model is still training, but I thought I would use my training time to summarise my intent with each keyword. I expect there to be problems and some of my experiments not to pan out so well, but I thought I would share. *Post-training update: the entire model is contaminated and most prompts are gonna churn out cyberpunk work, but the keywords are still good against one another and work as desired, and the base model has had some interesting lessons taught to it.* **phanborg** This set was the first to be tested; it is a combination of portraits of cyborgs, much like phancyborg and phandroid. The difference between the three is that phanborg uses a mix of images with the face covered and uncovered by machinery, while phancyborg uses only uncovered cyborgs and phandroid only covered cyborgs. The images used in all three are entirely different so that I can play with a diversity of trained features across my keywords. **phanbrutal** Images I consider a combination of cyberpunk and brutalism. **phanbw** One of my more experimental keywords, utilising monochrome cyberpunk images I find quite striking in black and white. However, apart from sticking to a cyberpunk theme there is no consistent subject matter, so it may just end up being a generic monochrome keyword. **phancircle** Another experimental keyword: it utilises a selection of architectural, textural and 3D design images with circles and spheres as a recurring motif. My hope is this keyword will help lend a cyberpunk texture to other prompts with a circular motif. **phancity** Bleak futuristic cityscapes; like phanbw, this experiment may fail due to overly varied subject matter. **phanconcrete** Concrete: images of architecture with mostly concrete finishes. It might be overkill alongside phanbrutal above, but I like that there will still be nuanced differences to play with. **phanconsole** A command centre needs buttons to beep and switches to boop; this keyword is all about screens and buttons. **phancorridor** Images of spaceship corridors and facilities to provide a more futuristic interior design. **phancyborg** An image selection of cyborgs with some or all of a human face uncovered. **phandraw** A selection focused on drawn cyberpunk artwork with bright neon colors and defined linework. **phandroid** This is where I pay most homage to nousr's robodiffusion, using only cyborgs with their faces concealed or just plain humanoid robots. **phandustrial** Futuristic industrial imagery of pipes, wires and messes of cables. **phanfashion** Trying to get that urbanwear hoodie look, but with some variations. **phanfem** A series of cyberpunk women. **phanglitch** Glitch art I had reblogged on the blog with a cyberpunk feel. Quite colorful. **phangrunge** Dilapidated dens for the scum of the city. Hopefully it will add a good dose of urban decay to your prompt. **phanlogo** Sleek graphic design, typography and logos.
**phanmachine** Built with unclear subject matter, phanmachine focuses on the details of futuristic shiny machinery, in the hope that it comes out as a style or texture that can be applied in prompts. **phanmecha** The three cyborg keywords are sleek and humanoid; phanmecha focuses more on creating unique robot body types. **phanmilitary** Future soldiers, man and machine. Likely to attach a gun to your prompt's character. **phanneon** Bright neon lights taking over the scene; this feature is what annoyed me with a lot of cyberpunk prompts in AI models. Overall I have it pretty well isolated to this keyword, if you want those futuristic glowies. **phanrooms** Totally separate from the rest of the theming, phanrooms is trained on backrooms and liminal space imagery, which, like cyberpunk, is of high visual interest to me and something the base model can sometimes struggle with. **phansterile** This is like cyberpunk cleancore: lots of white, very clean, clinical theming. **phantex** I don't know why latex outfits are cyberpunk, but they just are; these images were selected for the accessorising on top of just the latex outfits. **phanture** Abstract textures that were cyberpunk enough for me to put on my blog.
YurtsAI/yurts-python-code-gen-30-sparse
YurtsAI
2022-10-27T20:39:18Z
560
19
transformers
[ "transformers", "pytorch", "codegen", "text-generation", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-24T22:22:16Z
--- license: bsd-3-clause --- # Maverick (Yurts' Python Code Generation Model) ## Model description This code generation model was fine-tuned on Python code from a generic multi-language code generation model. The model was then pushed to 30% sparsity using Yurts' in-house technology without performance loss. In this specific instance, the class representation for the network is still dense. This particular model has 350M trainable parameters. ## Training data This model was tuned on a subset of the Python data available in the BigQuery open-source [GitHub dataset](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code). ## How to use The model is great at autocompleting from partially written function and class signatures. It is also decent at generating code from a natural-language prompt given as a comment. If you find something cool you can do with the model, be sure to share it with us! Check out our [colab notebook](https://colab.research.google.com/drive/1NDO4X418HuPJzF8mFc6_ySknQlGIZMDU?usp=sharing) to see how to invoke the model and try it out. ## Feedback and Questions Have any questions or feedback? Find us on [Discord](https://discord.gg/2x4rmSGER9).
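A minimal generation sketch (not from the original card; the decoding settings are illustrative, and the model is assumed to load as a standard CodeGen-style causal LM via `AutoModelForCausalLM`):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "YurtsAI/yurts-python-code-gen-30-sparse"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Autocomplete from a partially written function signature
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,                      # greedy decoding; enable sampling for more variety
    pad_token_id=tokenizer.eos_token_id,  # avoids a warning for models without a pad token
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```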
andrewzhang505/sf2-lunar-lander
andrewzhang505
2022-10-27T19:51:07Z
2
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-27T19:50:47Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - metrics: - type: mean_reward value: 126.58 +/- 137.36 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLanderContinuous-v2 type: LunarLanderContinuous-v2 --- An **APPO** model trained on the **LunarLanderContinuous-v2** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
orlcast/layoutxlm-finetuned-xfund-it-re
orlcast
2022-10-27T19:29:47Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "generated_from_trainer", "dataset:xfun", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2022-10-20T13:37:37Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - xfun metrics: - precision - recall - f1 model-index: - name: layoutxlm-finetuned-xfund-it-re results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutxlm-finetuned-xfund-it-re This model is a fine-tuned version of [orlcast/layoutxlm-finetuned-xfund-it-re](https://huggingface.co/orlcast/layoutxlm-finetuned-xfund-it-re) on the xfun dataset. It achieves the following results on the evaluation set: - Precision: 0.5092 - Recall: 0.7450 - F1: 0.6050 - Loss: 0.0020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 4000 ### Training results ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 2.6.1 - Tokenizers 0.12.1
sam34738/roberta-nisha
sam34738
2022-10-27T19:29:33Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T19:03:16Z
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-nisha results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-nisha This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3254 | 1.0 | 460 | 0.7247 | | 0.5791 | 2.0 | 920 | 0.5375 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.1
hoodhahmed/dhivehi_corpus
hoodhahmed
2022-10-27T18:59:43Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2022-10-27T18:59:43Z
--- license: bigscience-openrail-m ---
sam34738/roberta-kabita
sam34738
2022-10-27T18:33:31Z
161
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T18:13:13Z
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-kabita results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-kabita This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2327 | 1.0 | 460 | 0.6935 | | 0.4793 | 2.0 | 920 | 0.4709 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.1
vict0rsch/climateGAN
vict0rsch
2022-10-27T17:49:52Z
0
2
null
[ "Climate Change", "GAN", "Domain Adaptation", "en", "license:gpl-3.0", "region:us" ]
null
2022-10-24T13:17:28Z
--- language: - en tags: - Climate Change - GAN - Domain Adaptation license: gpl-3.0 title: ClimateGAN emoji: 🌎 colorFrom: blue colorTo: green sdk: gradio sdk_version: 3.6 app_file: app.py inference: true pinned: true --- # ClimateGAN: Raising Awareness about Climate Change by Generating Images of Floods This repository contains the code used to train the model presented in our **[paper](https://openreview.net/forum?id=EZNOb_uNpJk)**. It is not simply a presentation repository but the code we have used over the past 30 months to arrive at our final architecture. As such, you will find many scripts, classes, blocks and options which we actively use for our own development purposes but which are not directly relevant to reproducing results or using pretrained weights. ![flood processing](images/flood.png) If you use this code, data or pre-trained weights, please cite our ICLR 2022 paper: ``` @inproceedings{schmidt2022climategan, title = {Climate{GAN}: Raising Climate Change Awareness by Generating Images of Floods}, author = {Victor Schmidt and Alexandra Luccioni and M{\'e}lisande Teng and Tianyu Zhang and Alexia Reynaud and Sunand Raghupathi and Gautier Cosne and Adrien Juraver and Vahe Vardanyan and Alex Hern{\'a}ndez-Garc{\'\i}a and Yoshua Bengio}, booktitle = {International Conference on Learning Representations}, year = {2022}, url = {https://openreview.net/forum?id=EZNOb_uNpJk} } ``` ## Using pre-trained weights from this Huggingface Space and Stable Diffusion In-painting <p align="center"> <strong>Huggingface ClimateGAN Space:</strong> <a href="https://huggingface.co/spaces/vict0rsch/climateGAN" target="_blank"> <img src="https://huggingface.co/vict0rsch/climateGAN/resolve/main/images/hf-cg.png"> </a> </p> 1. Download code and model ```bash git lfs install git clone https://huggingface.co/vict0rsch/climateGAN git lfs pull # optional if you don't have the weights ``` 2. Install requirements ``` pip install -r requirements.txt ``` 3. **Enable Stable Diffusion Inpainting** by visiting the model's card: https://huggingface.co/runwayml/stable-diffusion-inpainting **and** running `$ huggingface-cli login` 4. Run `$ python climategan_wrapper.py help` for usage instructions on how to run inference on a folder of images. 5. Run `$ python app.py` to see the Gradio app. 1. To use Google Street View, you'll need an API key; set the `GMAPS_API_KEY` environment variable. 2. To use Stable Diffusion when you can't run `$ huggingface-cli login` (on a Huggingface Space, for instance), set the `HF_AUTH_TOKEN` env variable to a [Huggingface authorization token](https://huggingface.co/settings/tokens) 3. To change the UI without model overhead, set the `CG_DEV_MODE` environment variable to `true`. For more fine-grained control over ClimateGAN's inference, refer to `apply_events.py` (it does not support the Stable Diffusion painter). **Note:** by design you don't have control over the prompt, because I disabled the safety checker. Fork this space/repo and do it yourself if you really need to change the prompt. At least [open a discussion](https://huggingface.co/spaces/vict0rsch/climateGAN/discussions). ## Using pre-trained weights from source In the paper, we present ClimateGAN as a solution to produce images of floods.
It can actually do **more**: * reusing the segmentation map, we are able to isolate the sky, turn it red and in a few more steps create an image resembling the consequences of a wildfire on a neighboring area, similarly to the [California wildfires](https://www.google.com/search?q=california+wildfires+red+sky&source=lnms&tbm=isch&sa=X&ved=2ahUKEwisws-hx7zxAhXxyYUKHQyKBUwQ_AUoAXoECAEQBA&biw=1680&bih=917&dpr=2). * reusing the depth map, we can simulate the consequences of a smog event on an image, scaling the intensity of the filter by the distance of an object to the camera, as per [HazeRD](http://www2.ece.rochester.edu/~gsharma/papers/Zhang_ICIP2017_HazeRD.pdf) ![image of wildfire processing](images/wildfire.png) ![image of smog processing](images/smog.png) In this section we'll explain how to produce the `Painted Input` along with the Smog and Wildfire outputs of a pre-trained ClimateGAN model. ### Installation This repository and associated model have been developed using Python 3.8.2 and **Pytorch 1.7.0**. ```bash $ git clone git@github.com:cc-ai/climategan.git $ cd climategan $ pip install -r requirements-3.8.2.txt # or `requirements-any.txt` for other Python versions (not tested but expected to be fine) ``` Our pipeline uses [comet.ml](https://comet.ml) to log images. You don't *have* to use their services but we recommend you do, as images can be uploaded to your workspace instead of being written to disk. If you want to use Comet, make sure you have the [appropriate configuration in place (API key and workspace at least)](https://www.comet.ml/docs/python-sdk/advanced/#non-interactive-setup). ### Inference 1. Download and unzip the weights [from this link](https://drive.google.com/u/0/uc?id=18OCUIy7JQ2Ow_-cC5xn_hhDn-Bp45N1K&export=download) (check out [`gdown`](https://github.com/wkentaro/gdown) for a command-line interface) and put them in `config/` ``` $ pip install gdown $ mkdir config $ cd config $ gdown https://drive.google.com/u/0/uc?id=18OCUIy7JQ2Ow_-cC5xn_hhDn-Bp45N1K $ unzip release-github-v1.zip $ cd .. ``` 2. Run from the repo's root: 1. With `comet`: ```bash python apply_events.py --batch_size 4 --half --images_paths path/to/a/folder --resume_path config/model/masker --upload ``` 2. Without `comet` (and shortened args compared to the previous example): ```bash python apply_events.py -b 4 --half -i path/to/a/folder -r config/model/masker --output_path path/to/a/folder ``` The `apply_events.py` script has many options; for instance, to use a different output size than the default `640 x 640` pixels, look at the code or run `python apply_events.py --help`. ## Training from scratch ClimateGAN is split into two main components: the Masker, which produces a binary mask of where water should go, and the Painter, which generates water within this mask given an initial image's context. ### Configuration The code is structured to use `shared/trainer/defaults.yaml` as the default configuration. There are 2 ways of overriding those defaults for your purposes (without altering that file): 1. By providing an alternative configuration as a command line argument `config=path/to/config.yaml` 1. The code will first load `shared/trainer/defaults.yaml` 2. *then* update the resulting dictionary with values read from the provided `config` argument. 3. The folder `config/` is NOT tracked by git so you would typically put them there 2. 
By overwriting specific arguments from the command line, like `python train.py data.loaders.batch_size=8` ### Data #### Masker ##### Real Images Because of copyright issues we are not able to share the real images scraped from the internet. You would have to do that yourself. In the `yaml` config file, the code expects a key pointing to a `json` file like `data.files.<train or val>.r: <path/to/a/json/file>`. This `json` file should be a list of dictionaries with tasks as keys and files as values. Example: ```json [ { "x": "path/to/a/real/image", "s": "path/to/a/segmentation_map", "d": "path/to/a/depth_map" }, ... ] ``` Following the [ADVENT](https://github.com/valeoai/ADVENT) procedure, only `x` should be required. We use `s` and `d` maps inferred from pre-trained models (DeepLab v3+ and MiDAS) as pseudo-labels in the first epochs of training (see `pseudo:` in the config file). ##### Simulated Images We share snapshots of the Virtual World we created in the [Mila-Simulated-Flood dataset](). You can download and unzip one water-level and then produce json files similar to those of the real data, with an additional key `"m": "path/to/a/ground_truth_sim_mask"`. Lastly, edit the config file: `data.files.<train or val>.s: <path/to/a/json/file>` #### Painter The painter expects input images and binary masks to train using the [GauGAN](https://github.com/NVlabs/SPADE) training procedure. Unfortunately we cannot openly share the collected data, but as with the Masker's real data you would point to it using a `json` file as: ```json [ { "x": "path/to/a/real/image", "m": "path/to/a/water_mask" }, ... ] ``` And put those files as values of `data.files.<train or val>.rf: <path/to/a/json/file>` in the configuration. ## Coding conventions * Tasks * `x` is an input image, in [-1, 1] * `s` is a segmentation target with `long` classes * `d` is a depth map target in R; it may actually be `log(depth)` or `1/depth` * `m` is a binary mask with 1s where water is/should be * Domains * `r` is the *real* domain for the masker. Input images are real pictures of urban/suburban/rural areas * `s` is the *simulated* domain for the masker. Input images are taken from our Unity world * `rf` is the *real flooded* domain for the painter. Training images are pairs `(x, m)` of flooded scenes for which the water should be reconstructed; in the validation data, input images are not flooded and we provide a manually labeled mask `m` * `kitti` is a special `s` domain to pre-train the masker on [Virtual Kitti 2](https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds-vkitti-2/) * it alters the `trainer.loaders` dict to select relevant data sources from `trainer.all_loaders` in `trainer.switch_data()`. The rest of the code is identical. 
* Flow * This describes the call stack for the trainer's standard training procedure * `train()` * `run_epoch()` * `update_G()` * `zero_grad(G)` * `get_G_loss()` * `get_masker_loss()` * `masker_m_loss()` -> masking loss * `masker_s_loss()` -> segmentation loss * `masker_d_loss()` -> depth estimation loss * `get_painter_loss()` -> painter's loss * `g_loss.backward()` * `g_opt_step()` * `update_D()` * `zero_grad(D)` * `get_D_loss()` * painter's disc losses * `masker_m_loss()` -> masking AdvEnt disc loss * `masker_s_loss()` -> segmentation AdvEnt disc loss * `d_loss.backward()` * `d_opt_step()` * `update_learning_rates()` -> update learning rates according to schedules defined in `opts.gen.opt` and `opts.dis.opt` * `run_validation()` * compute val losses * `eval_images()` -> compute metrics * `log_comet_images()` -> compute and upload inferences * `save()`
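The masker's `json` data files described in the Data section are plain lists of per-sample dictionaries, so they can be generated with a few lines of Python. A minimal sketch, assuming hypothetical folder names and that each pseudo-label file shares its image's file stem:

```python
import json
from pathlib import Path

# Hypothetical layout; point these at your own images and pseudo-labels.
image_dir = Path("data/real/images")
seg_dir = Path("data/real/seg")      # DeepLab v3+ pseudo-labels
depth_dir = Path("data/real/depth")  # MiDAS pseudo-labels

samples = [
    {
        "x": str(img),
        "s": str(seg_dir / f"{img.stem}.png"),
        "d": str(depth_dir / f"{img.stem}.npy"),
    }
    for img in sorted(image_dir.glob("*.jpg"))
]

with open("config/train_r.json", "w") as f:
    json.dump(samples, f, indent=2)
```

Only `x` is strictly required for the real domain; drop the `s` and `d` entries if you are not using pseudo-labels.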
OpenMatch/cocodr-base
OpenMatch
2022-10-27T16:20:16Z
11
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-26T05:51:29Z
--- license: mit --- This model has been pretrained on the BEIR corpus without relevance-level supervision, following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR. The model uses BERT-base as its backbone and has 110M parameters.
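Although the checkpoint is exposed through the standard `transformers` BERT classes (with a `fill-mask` head), for retrieval you typically want text embeddings. A minimal sketch; `[CLS]` pooling and dot-product scoring are illustrative choices here, not details stated on this card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/cocodr-base")
model = AutoModel.from_pretrained("OpenMatch/cocodr-base")  # loads the BERT encoder, dropping the MLM head

texts = [
    "what is dense retrieval?",
    "Dense retrieval encodes queries and documents into vectors and matches them by similarity.",
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

# Assumption: use the [CLS] token embedding as the text representation.
embeddings = hidden[:, 0]
score = embeddings[0] @ embeddings[1]  # dot-product relevance score
print(score.item())
```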
mgb-dx-meetup/xlm-roberta-finetuned-sentiment
mgb-dx-meetup
2022-10-27T15:37:04Z
102
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:lewtun/autotrain-data-mgb-product-reviews-xlm-r", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T15:17:01Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - lewtun/autotrain-data-mgb-product-reviews-xlm-r co2_eq_emissions: emissions: 19.116414139555882 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1904264758 - CO2 Emissions (in grams): 19.1164 ## Validation Metrics - Loss: 1.021 - Accuracy: 0.563 - Macro F1: 0.555 - Micro F1: 0.563 - Weighted F1: 0.556 - Macro Precision: 0.555 - Micro Precision: 0.563 - Weighted Precision: 0.556 - Macro Recall: 0.562 - Micro Recall: 0.563 - Weighted Recall: 0.563 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lewtun/autotrain-mgb-product-reviews-xlm-r-1904264758 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lewtun/autotrain-mgb-product-reviews-xlm-r-1904264758", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-mgb-product-reviews-xlm-r-1904264758", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Plaban81/vegetable-classifier
Plaban81
2022-10-27T15:35:01Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-27T15:34:48Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: vegetable-classifier results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8571428656578064 --- # vegetable-classifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Brinjal ![Brinjal](images/Brinjal.jpg) #### Cabbage ![Cabbage](images/Cabbage.jpg) #### Cauliflower ![Cauliflower](images/Cauliflower.jpg) #### Raddish ![Raddish](images/Raddish.jpg) #### Tomato ![Tomato](images/Tomato.jpg)
Sennodipoi/LayoutLMv3-FUNSD-ft
Sennodipoi
2022-10-27T15:29:16Z
5
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-23T08:14:07Z
LayoutLMv3 fine-tuned on the FUNSD dataset. Code and results are available at the official GitHub repository of my [Master's Degree thesis](https://github.com/AleRosae/thesis-layoutlm). Results obtained using seqeval in strict mode: | | Precision | Recall | F1-score | Variance (F1) | |--------------|-----------|--------|----------|---------------| | Answer | 0.90 | 0.91 | 0.90 | 3e-5 | | Header | 0.61 | 0.66 | 0.63 | 4e-4 | | Question | 0.88 | 0.87 | 0.88 | 1e-4 | | Micro avg | 0.87 | 0.88 | 0.87 | 3e-5 | | Macro avg | 0.79 | 0.82 | 0.80 | 3e-5 | | Weighted avg | 0.87 | 0.88 | 0.87 | 3e-5 |
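A minimal inference sketch for this checkpoint; using the base `microsoft/layoutlmv3-base` processor with its built-in OCR is an assumption, since the card does not document pre-processing:

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# apply_ocr=True makes the processor extract words and boxes with Tesseract.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("Sennodipoi/LayoutLMv3-FUNSD-ft")

image = Image.open("form.png").convert("RGB")  # placeholder path to a scanned form
encoding = processor(image, return_tensors="pt")

logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()
# Assumes the checkpoint stores its id2label mapping; predictions are per (sub)token.
print([model.config.id2label[p] for p in predictions])
```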
JoAmps/trialz
JoAmps
2022-10-27T15:28:22Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T15:04:29Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: trialz results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trialz This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 282 | 2.1342 | | 2.308 | 2.0 | 564 | 2.0320 | | 2.308 | 3.0 | 846 | 2.0148 | | 2.1411 | 4.0 | 1128 | 2.0076 | | 2.1411 | 5.0 | 1410 | 2.0043 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.12.1
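Since the checkpoint is tagged `fill-mask`, it can be queried with the standard pipeline; the example sentence below is arbitrary and not drawn from the (unknown) training data:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="JoAmps/trialz")
# distilroberta-derived models use <mask> as the mask token.
print(fill_mask("The study <mask> were published last week."))
```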
Sennodipoi/LayoutLMv3-kleisterNDA
Sennodipoi
2022-10-27T15:26:00Z
5
1
transformers
[ "transformers", "pytorch", "layoutlmv3", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-24T15:26:45Z
LayoutLMv3 fine-tuned on the Kleister-NDA dataset. Code (including pre-processing) and results are available at the official GitHub repository of my [Master's Degree thesis](https://github.com/AleRosae/thesis-layoutlm). Results obtained with seqeval in strict mode: | | Precision | Recall | F1-score | Variance (F1) | |----------------|-----------|--------|----------|---------------| | EFFECTIVE_DATE | 0.92 | 0.99 | 0.95 | 5e-5 | | JURISDICTION | 0.87 | 0.88 | 0.88 | 8e-6 | | PARTY | 0.92 | 0.99 | 0.95 | 5e-5 | | TERM | 1 | 1 | 1 | 0 | | Micro avg | 0.91 | 0.96 | 0.94 | 2e-5 | | Macro avg | 0.92 | 0.96 | 0.94 | 3e-7 | | Weighted avg | 0.91 | 0.96 | 0.94 | 2e-5 | Since I used the same segmentation strategy as the original paper, i.e. using the labels to create segments, the scores are not directly comparable with those of the other LayoutLM versions.
pig4431/sst2_bert_3epoch
pig4431
2022-10-27T15:01:53Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T14:55:30Z
--- tags: - generated_from_trainer model-index: - name: sst2_bert_3epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sst2_bert_3epoch This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
Shri3/q-Taxi-v3
Shri3
2022-10-27T14:36:11Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-27T14:36:05Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="Shri3/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Shri3/q-FrozenLake-v1-4x4-noSlippery
Shri3
2022-10-27T14:33:14Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-27T14:07:26Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Shri3/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
yeahrmek/arxiv-math-lean
yeahrmek
2022-10-27T14:05:48Z
0
0
null
[ "region:us" ]
null
2022-10-27T12:23:41Z
This is a BPE tokenizer based on "Salesforce/codegen-350M-mono". The tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of a sentence (without a space) or not. We used the ArXiv subset of The Pile dataset and proof steps from the [lean-step-public](https://github.com/jesse-michael-han/lean-step-public) dataset to train the tokenizer.
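A short sketch of the space-sensitivity described above, assuming the tokenizer files in this repository load through `AutoTokenizer`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yeahrmek/arxiv-math-lean")

# The same word is encoded differently with and without a leading space,
# because spaces are treated as part of the tokens (sentencepiece-style).
print(tokenizer.tokenize("theorem"))   # as the first word of a sentence
print(tokenizer.tokenize(" theorem"))  # preceded by a space
```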
JoAmps/GhPoliticsBERT
JoAmps
2022-10-27T13:15:06Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T10:55:13Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: GhPoliticsBERT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GhPoliticsBERT This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0 | 1.0 | 9188 | 0.0000 | | 0.0 | 2.0 | 18376 | 0.0000 | | 0.0 | 3.0 | 27564 | 0.0000 | | 0.0 | 4.0 | 36752 | 0.0000 | | 0.0 | 5.0 | 45940 | 0.0000 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
kevinbror/bertbaseuncasedny
kevinbror
2022-10-27T12:13:45Z
61
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-10-27T12:13:00Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: bertbaseuncasedny results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bertbaseuncasedny This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3901 - Train End Logits Accuracy: 0.8823 - Train Start Logits Accuracy: 0.8513 - Validation Loss: 1.2123 - Validation End Logits Accuracy: 0.7291 - Validation Start Logits Accuracy: 0.6977 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.2597 | 0.6683 | 0.6277 | 1.0151 | 0.7214 | 0.6860 | 0 | | 0.7699 | 0.7820 | 0.7427 | 1.0062 | 0.7342 | 0.6996 | 1 | | 0.5343 | 0.8425 | 0.8064 | 1.1162 | 0.7321 | 0.7010 | 2 | | 0.3901 | 0.8823 | 0.8513 | 1.2123 | 0.7291 | 0.6977 | 3 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
Rijgersberg/whisper-small-fy-NL
Rijgersberg
2022-10-27T08:50:21Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-25T22:17:08Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: whisper-small-fy-NL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-fy-NL This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [CommonVoice 11 `fy-NL` (West-Frisian)](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/fy-NL/train) dataset. It achieves the following results on the evaluation set: - Loss: 0.5276 - Wer: 0.2919 The Wer before finetuning was 1.0622. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | | 0 | 0 | | 1.0622| | 0.9177 | 1.0 | 211 | 0.8145 | 0.3450 | | 0.5807 | 2.0 | 422 | 0.7113 | 0.3640 | | 0.2884 | 3.0 | 633 | 0.5276 | 0.2919 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
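A minimal transcription sketch using the `transformers` ASR pipeline; the audio file name is a placeholder:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Rijgersberg/whisper-small-fy-NL")
result = asr("frisian_sample.wav")  # placeholder path to a speech recording
print(result["text"])
```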
arshiya20/epochs-finetuned-squad
arshiya20
2022-10-27T07:44:45Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-10-27T05:38:23Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: epochs-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # epochs-finetuned-squad This model was trained from scratch on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.7553 | 1.0 | 5533 | 1.2460 | | 0.739 | 2.0 | 11066 | 1.2609 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
sd-concepts-library/pintu
sd-concepts-library
2022-10-27T06:49:30Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-10-27T06:49:13Z
--- license: mit --- ### pintu on Stable Diffusion This is the `<pintu-dog>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<pintu-dog> 0](https://huggingface.co/sd-concepts-library/pintu/resolve/main/concept_images/IMG_20211119_102937.jpg) ![<pintu-dog> 1](https://huggingface.co/sd-concepts-library/pintu/resolve/main/concept_images/IMG_20221026_091617.jpg) ![<pintu-dog> 2](https://huggingface.co/sd-concepts-library/pintu/resolve/main/concept_images/IMG_20221026_091644.jpg) ![<pintu-dog> 3](https://huggingface.co/sd-concepts-library/pintu/resolve/main/concept_images/IMG-20210609-WA0002.jpeg) ![<pintu-dog> 4](https://huggingface.co/sd-concepts-library/pintu/resolve/main/concept_images/IMG-20220612-WA0017.jpg) ![<pintu-dog> 5](https://huggingface.co/sd-concepts-library/pintu/resolve/main/concept_images/IMG-20220612-WA0006.jpg)
teacookies/autotrain-27102022-cert1-1899464570
teacookies
2022-10-27T06:29:42Z
13
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-27102022-cert1", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-27T06:19:22Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-27102022-cert1 co2_eq_emissions: emissions: 16.254745105263574 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1899464570 - CO2 Emissions (in grams): 16.2547 ## Validation Metrics - Loss: 0.004 - Accuracy: 0.999 - Precision: 0.972 - Recall: 0.979 - F1: 0.975 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-27102022-cert1-1899464570 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-27102022-cert1-1899464570", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-27102022-cert1-1899464570", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
huggingtweets/ferret_gf
huggingtweets
2022-10-27T06:27:00Z
5
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-27T06:26:17Z
--- language: en thumbnail: http://www.huggingtweets.com/ferret_gf/1666852015981/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1583569492789153799/vJ1FEmHw_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">alex</div> <div style="text-align: center; font-size: 14px;">@ferret_gf</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from alex. | Data | alex | | --- | --- | | Tweets downloaded | 703 | | Retweets | 163 | | Short tweets | 183 | | Tweets kept | 357 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/95pl7wzb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ferret_gf's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2k6rhew5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2k6rhew5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ferret_gf') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/schizo_freq
huggingtweets
2022-10-27T03:52:41Z
105
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-09T17:50:33Z
--- language: en thumbnail: http://www.huggingtweets.com/schizo_freq/1666842754202/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1582126821025382400/PZjx83du_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lukas (computer)</div> <div style="text-align: center; font-size: 14px;">@schizo_freq</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lukas (computer). | Data | Lukas (computer) | | --- | --- | | Tweets downloaded | 3234 | | Retweets | 481 | | Short tweets | 324 | | Tweets kept | 2429 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11autkzl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @schizo_freq's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2km4y95n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2km4y95n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/schizo_freq') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
PKR/swin-tiny-patch4-window7-224-finetuned-eurosat
PKR
2022-10-27T03:21:42Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-27T02:53:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9814814814814815 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0593 - Accuracy: 0.9815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2731 | 1.0 | 190 | 0.1128 | 0.9637 | | 0.1862 | 2.0 | 380 | 0.0759 | 0.9759 | | 0.1409 | 3.0 | 570 | 0.0593 | 0.9815 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
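A minimal sketch of running the fine-tuned classifier on a single image; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="PKR/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_tile.png"))  # placeholder path; returns a list of label/score dicts
```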
roborovski/ddpm-butterflies-128
roborovski
2022-10-27T02:31:44Z
8
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-26T22:44:56Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/roborovski/ddpm-butterflies-128/tensorboard?#scalars)
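The card leaves the usage snippet as a TODO; the standard `diffusers` unconditional-generation call below should apply to a `DDPMPipeline` checkpoint like this one, offered here as a sketch rather than an author-provided example:

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("roborovski/ddpm-butterflies-128")
image = pipeline().images[0]  # runs the full DDPM sampling loop at 128x128
image.save("butterfly.png")
```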
CharlieP/t5-small-nlpfinalproject-xsum
CharlieP
2022-10-27T00:12:48Z
9
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-26T15:42:09Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: CharlieP/t5-small-nlpfinalproject-xsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # CharlieP/t5-small-nlpfinalproject-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.2391 - Validation Loss: 3.0511 - Train Rouge1: 21.2434 - Train Rouge2: 4.0808 - Train Rougel: 16.6836 - Train Rougelsum: 16.6460 - Train Gen Len: 18.42 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 3.8204 | 3.2757 | 18.2829 | 2.7616 | 14.7101 | 14.7047 | 18.59 | 0 | | 3.4646 | 3.1560 | 20.4371 | 3.6903 | 16.0587 | 16.0790 | 18.35 | 1 | | 3.3630 | 3.1028 | 20.7907 | 3.9282 | 15.9696 | 15.8916 | 18.42 | 2 | | 3.2904 | 3.0713 | 21.6980 | 4.3218 | 16.7261 | 16.6776 | 18.42 | 3 | | 3.2391 | 3.0511 | 21.2434 | 4.0808 | 16.6836 | 16.6460 | 18.42 | 4 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.1
sd-concepts-library/anime-background-style
sd-concepts-library
2022-10-26T23:48:27Z
0
7
null
[ "license:mit", "region:us" ]
null
2022-10-26T23:39:03Z
--- license: mit --- ### Anime Background Style on Stable Diffusion This is the `<anime-background-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<anime-background-style> 0](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/5.jpeg) ![<anime-background-style> 1](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/8.jpeg) ![<anime-background-style> 2](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/3.jpeg) ![<anime-background-style> 3](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/0.jpeg) ![<anime-background-style> 4](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/9.jpeg) ![<anime-background-style> 5](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/6.jpeg) ![<anime-background-style> 6](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/2.jpeg) ![<anime-background-style> 7](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/1.jpeg) ![<anime-background-style> 8](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/4.jpeg) ![<anime-background-style> 9](https://huggingface.co/sd-concepts-library/anime-background-style/resolve/main/concept_images/7.jpeg) Here are images generated with this style: ![a suburban street in the style of <anime-background-style>](https://i.imgur.com/S774UmL.png) ![a public pool in the style of <anime-background-style>](https://i.imgur.com/d1Z4V3K.png) ![a lush jungle in the style of <anime-background-style>](https://i.imgur.com/06vhfIH.png) This style does not produce good results as most of the training images were too small. I'll likely train it again with bigger ones.
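Besides the linked notebooks, recent versions of `diffusers` can load a Textual Inversion concept directly into a pipeline. A sketch, assuming a `diffusers` release that provides `load_textual_inversion` and access to a base Stable Diffusion checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/anime-background-style")

image = pipe("a quiet train station in the style of <anime-background-style>").images[0]
image.save("station.png")
```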
musika/musika_misc
musika
2022-10-26T22:48:07Z
0
1
null
[ "audio", "music", "generation", "tensorflow", "arxiv:2208.08706", "license:mit", "region:us" ]
null
2022-10-26T22:46:21Z
--- license: mit tags: - audio - music - generation - tensorflow --- # Musika Model: musika_misc ## Model provided by: marcop Pretrained musika_misc model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation. Introduced in [this paper](https://arxiv.org/abs/2208.08706). ## How to use You can generate music from this pretrained musika_misc model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r). ### Model description This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio. The generator has a context window of about 12 seconds of audio.
sd-concepts-library/kentaro-miura
sd-concepts-library
2022-10-26T22:24:04Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-10-26T22:23:57Z
--- license: mit --- ### Kentaro Miura on Stable Diffusion This is the `<kentaro-miura>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<kentaro-miura> 0](https://huggingface.co/sd-concepts-library/kentaro-miura/resolve/main/concept_images/3.jpeg) ![<kentaro-miura> 1](https://huggingface.co/sd-concepts-library/kentaro-miura/resolve/main/concept_images/0.jpeg) ![<kentaro-miura> 2](https://huggingface.co/sd-concepts-library/kentaro-miura/resolve/main/concept_images/2.jpeg) ![<kentaro-miura> 3](https://huggingface.co/sd-concepts-library/kentaro-miura/resolve/main/concept_images/1.jpeg) ![<kentaro-miura> 4](https://huggingface.co/sd-concepts-library/kentaro-miura/resolve/main/concept_images/4.jpeg)
huggingtweets/the_boolaidman
huggingtweets
2022-10-26T21:55:47Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T17:15:53Z
--- language: en thumbnail: http://www.huggingtweets.com/the_boolaidman/1666821342474/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1528444052034789378/E1BRWZyE_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">theboghog</div> <div style="text-align: center; font-size: 14px;">@the_boolaidman</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from theboghog. | Data | theboghog | | --- | --- | | Tweets downloaded | 184 | | Retweets | 44 | | Short tweets | 32 | | Tweets kept | 108 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/lez3uo4l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_boolaidman's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34ufbard) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34ufbard/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/the_boolaidman') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/nearcyan
huggingtweets
2022-10-26T21:10:01Z
8
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T21:08:44Z
--- language: en thumbnail: http://www.huggingtweets.com/nearcyan/1666818597137/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1446575702439043077/kNKnkoyI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">nearcyan</div> <div style="text-align: center; font-size: 14px;">@nearcyan</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from nearcyan. | Data | nearcyan | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 132 | | Short tweets | 136 | | Tweets kept | 2978 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ilun9vdk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nearcyan's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16w8mubo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16w8mubo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nearcyan') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Kristijan/gpt2_wt103-40m_12-layer
Kristijan
2022-10-26T20:55:16Z
3
0
pytorch
[ "pytorch", "gpt2", "language-model", "transformer", "wikitext-103", "en", "arxiv:2210.13569", "model-index", "region:us" ]
null
2022-10-26T17:46:18Z
--- language: - en library_name: pytorch tags: - language-model - gpt2 - transformer - wikitext-103 model-index: - name: gpt2_wt103-40m_12-layer results: - task: type: language-modeling dataset: type: wikitext name: Wikitext-103 metrics: - type: perplexity value: 40.3 --- # Model description paper: [Characterizing Verbatim Short-Term Memory in Neural Language Models](https://arxiv.org/abs/2210.13569) This is a gpt2-small-like decoder-only transformer model trained on a 40M-token subset of the [wikitext-103 dataset](https://paperswithcode.com/dataset/wikitext-103). # Usage You can download and load the model as follows: ```python from transformers import GPT2LMHeadModel model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103-40m_12-layer") ``` Alternatively, if you've downloaded the checkpoint files in this repository, you could also do: ```python from transformers import GPT2LMHeadModel model = GPT2LMHeadModel.from_pretrained(path_to_folder_with_checkpoint_files) ``` To tokenize your text for this model, you should use the [tokenizer trained on Wikitext-103](https://huggingface.co/Kristijan/wikitext-103-tokenizer). # Intended uses This checkpoint is intended for research purposes, for example by researchers interested in studying the behavior of transformer language models trained on smaller datasets.
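Combining the checkpoint with the Wikitext-103 tokenizer linked above, perplexity on a piece of text can be estimated in a few lines. A sketch, assuming the tokenizer repository loads through `AutoTokenizer`; this is not the paper's evaluation code:

```python
import math
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("Kristijan/wikitext-103-tokenizer")
model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103-40m_12-layer")
model.eval()

text = "The tower is among the tallest structures in the city ."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels returns the mean cross-entropy over the sequence.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity: {math.exp(loss.item()):.1f}")
```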
GhifSmile/mT5_multilingual_XLSum-finetuned-indosum
GhifSmile
2022-10-26T20:49:59Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-26T15:43:40Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: mT5_multilingual_XLSum-finetuned-indosum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5_multilingual_XLSum-finetuned-indosum This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5512 - Rouge1: 0.3819 - Rouge2: 0.3102 - Rougel: 0.3529 - Rougelsum: 0.3687 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:| | 1.8183 | 1.0 | 7131 | 1.5512 | 0.3819 | 0.3102 | 0.3529 | 0.3687 | | 1.8191 | 2.0 | 14262 | 1.5512 | 0.3819 | 0.3102 | 0.3529 | 0.3687 | | 1.8197 | 3.0 | 21393 | 1.5512 | 0.3819 | 0.3102 | 0.3529 | 0.3687 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
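A minimal summarization sketch for this fine-tune, mirroring the usual mT5 seq2seq usage; the input article and generation settings are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "GhifSmile/mT5_multilingual_XLSum-finetuned-indosum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Teks berita berbahasa Indonesia yang ingin diringkas ..."  # placeholder article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

summary_ids = model.generate(**inputs, max_length=84, num_beams=4, no_repeat_ngram_size=2)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```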
Karelito00/beit-base-patch16-224-pt22k-ft22k-finetuned-mnist
Karelito00
2022-10-26T19:25:37Z
49
0
transformers
[ "transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "dataset:mnist", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-26T15:25:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - mnist metrics: - accuracy model-index: - name: beit-base-patch16-224-pt22k-ft22k-finetuned-mnist results: - task: name: Image Classification type: image-classification dataset: name: mnist type: mnist config: mnist split: train args: mnist metrics: - name: Accuracy type: accuracy value: 0.9935 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224-pt22k-ft22k-finetuned-mnist This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the mnist dataset. It achieves the following results on the evaluation set: - Loss: 0.0202 - Accuracy: 0.9935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3376 | 1.0 | 937 | 0.0446 | 0.9855 | | 0.318 | 2.0 | 1874 | 0.0262 | 0.9916 | | 0.2374 | 3.0 | 2811 | 0.0202 | 0.9935 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
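No usage snippet accompanies the card. One minimal way to try the classifier, assuming the repository ships the matching image processor, is the image-classification pipeline; the grayscale MNIST digit is converted to RGB to match the three-channel input BEiT expects.

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Karelito00/beit-base-patch16-224-pt22k-ft22k-finetuned-mnist",
)

# Take one digit from the MNIST test split and convert it to 3 channels.
image = load_dataset("mnist", split="test")[0]["image"].convert("RGB")
print(classifier(image))
```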
huggingtweets/simerino1
huggingtweets
2022-10-26T19:03:41Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T19:02:08Z
--- language: en thumbnail: http://www.huggingtweets.com/simerino1/1666811016675/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1174133652399300608/3UF7GOrK_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">computer</div> <div style="text-align: center; font-size: 14px;">@simerino1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from computer. | Data | computer | | --- | --- | | Tweets downloaded | 980 | | Retweets | 366 | | Short tweets | 96 | | Tweets kept | 518 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/356xy36h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @simerino1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1eld4xfg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1eld4xfg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/simerino1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
PraveenKishore/dqn-SpaceInvadersNoFrameskip-v4
PraveenKishore
2022-10-26T18:07:45Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-26T18:07:09Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 626.50 +/- 127.69 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PraveenKishore -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PraveenKishore -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PraveenKishore ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
asparius/big-balanced-combined-bert
asparius
2022-10-26T17:56:54Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T19:41:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: big-balanced-combined-bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # big-balanced-combined-bert This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2872 - Accuracy: 0.9055 - F1: 0.9061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
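The card does not document the dataset or the label names, so the following is only a sketch of how the checkpoint can be queried; the Turkish example sentence is hypothetical, and the meaning of each output index depends on the label mapping stored in the config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "asparius/big-balanced-combined-bert"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Bu ürün beklediğimden çok daha iyi çıktı.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Map probabilities back to whatever labels the checkpoint defines.
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```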
Stamford-Maxwell/ppo-LunarLander-v2
Stamford-Maxwell
2022-10-26T17:35:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-26T16:01:04Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 217.69 +/- 10.37 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
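The usage section above is still a TODO. A minimal loading-and-rollout sketch is given below; the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption rather than something stated in the card, and the loop uses the pre-0.26 `gym` API that Stable-Baselines3 targeted at the time.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; check the repository's file list if this does not resolve.
checkpoint = load_from_hub(
    repo_id="Stamford-Maxwell/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```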
huggingtweets/nuclearkatie
huggingtweets
2022-10-26T16:33:35Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T16:28:44Z
--- language: en thumbnail: http://www.huggingtweets.com/nuclearkatie/1666801970584/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1334988663629942789/nDPoGclx_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Katie 🎃Boo👻-mah</div> <div style="text-align: center; font-size: 14px;">@nuclearkatie</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Katie 🎃Boo👻-mah. | Data | Katie 🎃Boo👻-mah | | --- | --- | | Tweets downloaded | 3205 | | Retweets | 1130 | | Short tweets | 225 | | Tweets kept | 1850 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vtpuc3cq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nuclearkatie's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vpu6vsq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vpu6vsq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nuclearkatie') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Jsjjdnwjskxij6/Ffg
Jsjjdnwjskxij6
2022-10-26T15:24:13Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-10-26T15:24:13Z
--- license: bigscience-bloom-rail-1.0 ---
pig4431/rtm_ALBERT_5E
pig4431
2022-10-26T15:04:14Z
5
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "dataset:rotten_tomatoes", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-26T15:03:22Z
--- tags: - generated_from_trainer datasets: - rotten_tomatoes model-index: - name: model_output_dir results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output_dir This model was trained from scratch on the rotten_tomatoes dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
YumaSaito/distilbert-base-uncased-finetuned-emotion
YumaSaito
2022-10-26T15:03:55Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-23T14:15:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9261092845869646 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2181 - Accuracy: 0.926 - F1: 0.9261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8618 | 1.0 | 250 | 0.3206 | 0.903 | 0.8990 | | 0.2549 | 2.0 | 500 | 0.2181 | 0.926 | 0.9261 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
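For completeness, here is one way to query the checkpoint; the example sentence is made up, and whether the output shows emotion names (the emotion dataset uses sadness, joy, love, anger, fear, surprise) or generic `LABEL_i` indices depends on the label mapping saved in the config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YumaSaito/distilbert-base-uncased-finetuned-emotion",
)

# Returns the single highest-scoring label for the input sentence.
print(classifier("I can't wait to see the results of this experiment!"))
```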
judaschrist/ddpm-butterflies-128
judaschrist
2022-10-26T14:30:42Z
4
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:json", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-25T15:52:48Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: json metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `json` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/judaschrist/ddpm-butterflies-128/tensorboard?#scalars)
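The "How to use" section is still a TODO. A minimal sampling sketch with the standard `DDPMPipeline` API is below; by default generation runs the full 1000 DDPM denoising steps, so it is slow on CPU.

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("judaschrist/ddpm-butterflies-128")

# Sampling runs the full DDPM schedule and returns PIL images.
image = pipeline().images[0]
image.save("sample.png")
```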
mrm8488/codebert-base-finetuned-code-ner-15e
mrm8488
2022-10-26T13:42:00Z
24
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-26T11:57:15Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: codebert-base-finetuned-code-ner-15e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codebert-base-finetuned-code-ner-15e This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3831 - Precision: 0.6363 - Recall: 0.6494 - F1: 0.6428 - Accuracy: 0.9197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 191 | 0.4566 | 0.5021 | 0.4220 | 0.4585 | 0.8827 | | No log | 2.0 | 382 | 0.3756 | 0.5699 | 0.5764 | 0.5731 | 0.9043 | | 0.5133 | 3.0 | 573 | 0.3605 | 0.6001 | 0.5767 | 0.5882 | 0.9093 | | 0.5133 | 4.0 | 764 | 0.3500 | 0.6130 | 0.6130 | 0.6130 | 0.9153 | | 0.5133 | 5.0 | 955 | 0.3501 | 0.6337 | 0.6172 | 0.6254 | 0.9178 | | 0.2203 | 6.0 | 1146 | 0.3645 | 0.6250 | 0.6352 | 0.6300 | 0.9163 | | 0.2203 | 7.0 | 1337 | 0.3488 | 0.6263 | 0.6422 | 0.6341 | 0.9189 | | 0.1457 | 8.0 | 1528 | 0.3575 | 0.6372 | 0.6397 | 0.6384 | 0.9194 | | 0.1457 | 9.0 | 1719 | 0.3662 | 0.6406 | 0.6343 | 0.6375 | 0.9189 | | 0.1457 | 10.0 | 1910 | 0.3613 | 0.6374 | 0.6473 | 0.6423 | 0.9201 | | 0.107 | 11.0 | 2101 | 0.3716 | 0.6329 | 0.6544 | 0.6435 | 0.9197 | | 0.107 | 12.0 | 2292 | 0.3754 | 0.6328 | 0.6487 | 0.6406 | 0.9193 | | 0.107 | 13.0 | 2483 | 0.3826 | 0.6395 | 0.6490 | 0.6443 | 0.9204 | | 0.0863 | 14.0 | 2674 | 0.3821 | 0.6368 | 0.6535 | 0.6451 | 0.9200 | | 0.0863 | 15.0 | 2865 | 0.3831 | 0.6363 | 0.6494 | 0.6428 | 0.9197 | ### Evaluation results | | Algorithm | Application | Class | Code_Block | Data_Structure | Data_Type | Device | Error_Name | File_Name | File_Type | Function | HTML_XML_Tag | Keyboard_IP | Language | Library | Operating_System | Output_Block | User_Interface_Element | User_Name | Value | Variable | Version | Website | overall_precision | overall_recall | overall_f1 | overall_accuracy | |:----------|------------:|--------------:|------------:|-------------:|-----------------:|------------:|----------:|-------------:|------------:|------------:|-----------:|---------------:|--------------:|-----------:|-----------:|-------------------:|---------------:|-------------------------:|------------:|-----------:|-----------:|-----------:|----------:|--------------------:|-----------------:|-------------:|-------------------:| | precision | 0 | 0.619835 | 0.680851 | 0.455629 | 0.813187 | 0.592593 | 0.395062 | 0.181818 | 0.800505 | 0.775956 | 0.757664 | 0.585366 | 0.333333 | 0.689769 | 0.61807 | 0.769231 | 0.0212766 | 0.542214 | 0.4375 | 0.370236 | 0.560479 | 0.883721 | 0.382353 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | | recall | 0 | 0.677711 | 0.696864 | 0.494253 | 0.840909 | 0.8 | 0.533333 | 0.333333 | 
0.794486 | 0.628319 | 0.631387 | 0.470588 | 0.0169492 | 0.81323 | 0.546279 | 0.843373 | 0.04 | 0.653846 | 0.518519 | 0.52987 | 0.54482 | 0.914089 | 0.270833 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | | f1 | 0 | 0.647482 | 0.688765 | 0.474156 | 0.826816 | 0.680851 | 0.453901 | 0.235294 | 0.797484 | 0.694377 | 0.688786 | 0.521739 | 0.0322581 | 0.746429 | 0.579961 | 0.804598 | 0.0277778 | 0.592821 | 0.474576 | 0.435897 | 0.552538 | 0.898649 | 0.317073 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | | number | 31 | 664 | 1148 | 696 | 264 | 120 | 60 | 30 | 798 | 226 | 822 | 102 | 59 | 257 | 551 | 83 | 25 | 442 | 54 | 385 | 859 | 291 | 48 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
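The card reports per-entity metrics but no inference example. A token-classification pipeline call such as the one below should work; the example sentence is illustrative, and `aggregation_strategy="simple"` merges sub-word pieces into whole entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mrm8488/codebert-base-finetuned-code-ner-15e",
    aggregation_strategy="simple",
)

# Entities such as Library, Function and File_Name should be picked up here.
print(ner("Use pandas.read_csv to load train.csv into a DataFrame."))
```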
Linus4Lyf/test-food
Linus4Lyf
2022-10-26T13:34:09Z
24
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-26T13:33:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 10 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 10, "warmup_steps": 1, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
KGsteven/distilbert-base-uncased-finetuned-cola
KGsteven
2022-10-26T12:36:42Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-19T11:25:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3038 - Matthews Correlation: 0.9198 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 1.2169 | 1.0 | 626 | 0.6782 | 0.8605 | | 0.5513 | 2.0 | 1252 | 0.4085 | 0.8998 | | 0.343 | 3.0 | 1878 | 0.3346 | 0.9122 | | 0.1642 | 4.0 | 2504 | 0.3106 | 0.9165 | | 0.1216 | 5.0 | 3130 | 0.3038 | 0.9198 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.1
huggingtweets/doaenel
huggingtweets
2022-10-26T12:29:27Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-30T20:24:02Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1469646540612509701/x4eJRlkK_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Dantes</div> <div style="text-align: center; font-size: 14px;">@doaenel</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Dantes. | Data | Dantes | | --- | --- | | Tweets downloaded | 2609 | | Retweets | 29 | | Short tweets | 464 | | Tweets kept | 2116 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sbwdgoz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @doaenel's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8u23yy7u) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8u23yy7u/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/doaenel') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
doodlevelyn/bert-base-NER
doodlevelyn
2022-10-26T12:28:21Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-26T07:36:50Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-NER results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-NER This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4459 - Precision: 0.3972 - Recall: 0.2378 - F1: 0.2975 - Accuracy: 0.9571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0006 | 1.0 | 7365 | 0.3996 | 0.4030 | 0.2121 | 0.2780 | 0.9573 | | 0.0001 | 2.0 | 14730 | 0.3969 | 0.3798 | 0.2371 | 0.2920 | 0.9570 | | 0.0 | 3.0 | 22095 | 0.4459 | 0.3972 | 0.2378 | 0.2975 | 0.9571 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
studiolike/caps
studiolike
2022-10-26T12:04:28Z
13
0
tf-keras
[ "tf-keras", "ocr", "computer vision", "object detection", "image-to-text", "license:cc0-1.0", "region:us" ]
image-to-text
2022-10-22T05:21:34Z
--- tags: - ocr - computer vision - object detection - image-to-text license: - cc0-1.0 --- ## Keras Implementation of OCR model for reading captcha 🤖🦹🏻
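The card gives no loading instructions. Since the repository is tagged as a Keras model, the usual `from_pretrained_keras` helper is the natural way to fetch it; this is an assumption about how the weights were pushed, and decoding the raw output into captcha text still requires the CTC decoding step from the original Keras OCR-captcha example.

```python
from huggingface_hub import from_pretrained_keras

# Assumption: the repo was pushed with push_to_hub_keras / save_pretrained_keras.
model = from_pretrained_keras("studiolike/caps")
model.summary()
```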