Dataset schema (column types and observed value ranges):

| Column | Type | Min | Max |
|---------------|------------------------|---------------------|---------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 18:52:31 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 18:52:05 |
| card | string (length) | 11 | 1.01M |
lixiangchun/imagenet-swav-resnet50w2
lixiangchun
2022-10-28T04:13:37Z
0
0
tf-keras
[ "tf-keras", "onnx", "region:us" ]
null
2022-10-20T04:06:01Z
```python
import trace_layer2 as models
import torch

x = torch.randn(1, 3, 224, 224)

state_dict = torch.load('swav_imagenet_layer2.pt', map_location='cpu')
model = models.resnet50w2()
model.load_state_dict(state_dict)
model.eval()
feature = model(x)

traced_model = torch.jit.load('traced_swav_imagenet_layer2.pt', map_location='cpu')
traced_model.eval()
feature = traced_model(x)
```
agungbesti/house
agungbesti
2022-10-28T02:59:23Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-10-28T02:53:02Z
--- title: Protas emoji: 🏃 colorFrom: yellow colorTo: pink sdk: gradio app_file: app.py pinned: false license: apache-2.0 --- # Configuration `title`: _string_ Display title for the Space `emoji`: _string_ Space emoji (emoji-only character allowed) `colorFrom`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `colorTo`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `sdk`: _string_ Can be either `gradio` or `streamlit` `sdk_version` : _string_ Only applicable for `streamlit` SDK. See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. `app_file`: _string_ Path to your main application file (which contains either `gradio` or `streamlit` Python code). Path is relative to the root of the repository. `pinned`: _boolean_ Whether the Space stays on top of your list.
helloway/simple
helloway
2022-10-28T02:00:19Z
0
0
null
[ "audio-classification", "license:apache-2.0", "region:us" ]
audio-classification
2022-10-28T01:51:37Z
--- license: apache-2.0 tags: - audio-classification ---
Kolgrima/Luna
Kolgrima
2022-10-28T01:39:20Z
0
0
null
[ "license:openrail", "region:us" ]
null
2022-10-27T23:48:49Z
--- license: openrail --- ## Model of Evanna Lynch as Luna Lovegood If you've ever tried to create an image of Luna Lovegood from the movies, you'll have noticed Stable Diffusion is not good at this! That's where this model comes in. This has been trained on 38 images of Evanna Lynch as Luna Lovegood. ## Usage Simply use the keyword "**Luna**" anywhere in your prompt. ### Output Examples Each image has embedded data that can be read from the PNG info tab in Stable diffusion Web UI. ![portrait painting of luna.png](https://s3.amazonaws.com/moonup/production/uploads/1666916375858-63192b8247a84df2a5def800.png) ![portrait painting of luna 2.png](https://s3.amazonaws.com/moonup/production/uploads/1666920632892-63192b8247a84df2a5def800.png) ![Neon, Luna.png](https://s3.amazonaws.com/moonup/production/uploads/1666920632951-63192b8247a84df2a5def800.png) ![stylized luna.png](https://s3.amazonaws.com/moonup/production/uploads/1666920632715-63192b8247a84df2a5def800.png) ![Comic of luna.png](https://s3.amazonaws.com/moonup/production/uploads/1666920632967-63192b8247a84df2a5def800.png) ![portrait of luna drinking tea.png](https://s3.amazonaws.com/moonup/production/uploads/1666920632516-63192b8247a84df2a5def800.png) ![two tone Luna Comic.png](https://s3.amazonaws.com/moonup/production/uploads/1666920633065-63192b8247a84df2a5def800.png) ![Ink Luna.png](https://s3.amazonaws.com/moonup/production/uploads/1666920632939-63192b8247a84df2a5def800.png) ![luna, black and white, comic.png](https://s3.amazonaws.com/moonup/production/uploads/1666920633118-63192b8247a84df2a5def800.png) ![luna as a cute pixar character.png](https://s3.amazonaws.com/moonup/production/uploads/1666920631640-63192b8247a84df2a5def800.png)
skang/distilbert-base-uncased-finetuned-imdb
skang
2022-10-28T01:38:56Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-28T01:30:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.76 | 1.0 | 157 | 0.6640 | | 0.688 | 2.0 | 314 | 0.6581 | | 0.6768 | 3.0 | 471 | 0.6604 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
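The card above documents the fine-tuning run but gives no inference snippet. A minimal sketch using the standard Transformers fill-mask pipeline, matching the repo's `fill-mask` tag; the example sentence is made up:

```python
from transformers import pipeline

# Load the fine-tuned masked-language model from the Hub
mask_filler = pipeline("fill-mask", model="skang/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses the [MASK] token; print the top predictions with scores
for pred in mask_filler("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```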
ByungjunKim/distilbert-base-uncased-finetuned-imdb
ByungjunKim
2022-10-28T01:36:12Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-28T01:27:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.76 | 1.0 | 157 | 0.6640 | | 0.688 | 2.0 | 314 | 0.6581 | | 0.6768 | 3.0 | 471 | 0.6604 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
huggingtweets/revmaxxing
huggingtweets
2022-10-28T01:23:51Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-27T23:49:45Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1578729528695963649/mmiLKGp1_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Rev 🇷🇺 🌾 🛞</div> <div style="text-align: center; font-size: 14px;">@revmaxxing</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Rev 🇷🇺 🌾 🛞. | Data | Rev 🇷🇺 🌾 🛞 | | --- | --- | | Tweets downloaded | 3097 | | Retweets | 241 | | Short tweets | 416 | | Tweets kept | 2440 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nfmh3no/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @revmaxxing's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zust2rmi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zust2rmi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/revmaxxing') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Rogerooo/bordaloii
Rogerooo
2022-10-28T00:57:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-10-28T00:49:17Z
--- license: creativeml-openrail-m ---
OpenMatch/cocodr-large-msmarco-idro-only
OpenMatch
2022-10-28T00:45:35Z
105
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-28T00:42:33Z
--- license: mit --- This model has been pretrained on the MS MARCO corpus and then fine-tuned on MS MARCO training data with implicit distributionally robust optimization (iDRO), following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR. The model uses BERT-large as the backbone, with 335M parameters.
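The card describes pre-training and iDRO fine-tuning but shows no loading code. A hedged sketch of encoding text with the plain Transformers API; using the [CLS] vector as the representation is an assumption here, and the exact encoder wrapper and pooling used by COCO-DR are defined in the OpenMatch repository:

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "OpenMatch/cocodr-large-msmarco-idro-only"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Encode a query and take the [CLS] hidden state as its embedding (assumed pooling)
inputs = tokenizer("what is zero-shot dense retrieval?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
query_embedding = outputs.last_hidden_state[:, 0]
print(query_embedding.shape)  # (1, 1024) for a BERT-large backbone
```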
caffsean/bert-base-cased-deep-ritmo
caffsean
2022-10-28T00:17:00Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T03:19:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-deep-ritmo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-deep-ritmo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.0463 | 1.0 | 1875 | 3.7428 | | 3.3393 | 2.0 | 3750 | 3.0259 | | 2.7435 | 3.0 | 5625 | 2.5837 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
allenai/scirepeval_adapters_rgn
allenai
2022-10-28T00:05:08Z
6
0
adapter-transformers
[ "adapter-transformers", "adapterhub:scirepeval/regression", "bert", "dataset:allenai/scirepeval", "region:us" ]
null
2022-10-28T00:04:59Z
--- tags: - adapterhub:scirepeval/regression - adapter-transformers - bert datasets: - allenai/scirepeval --- # Adapter `allenai/scirepeval_adapters_rgn` for malteos/scincl An [adapter](https://adapterhub.ml) for the `malteos/scincl` model that was trained on the [scirepeval/regression](https://adapterhub.ml/explore/scirepeval/regression/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("malteos/scincl") adapter_name = model.load_adapter("allenai/scirepeval_adapters_rgn", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
OpenMatch/condenser-large
OpenMatch
2022-10-28T00:04:23Z
25
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T23:44:05Z
--- license: mit --- This model has been pretrained on BookCorpus and English Wikipedia following the approach described in the paper **Condenser: a Pre-training Architecture for Dense Retrieval**. The model can be used to reproduce the experimental results in the GitHub repository https://github.com/OpenMatch/COCO-DR. The model uses BERT-large as the backbone, with 335M parameters.
OpenMatch/co-condenser-large
OpenMatch
2022-10-28T00:03:42Z
33
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T23:56:37Z
--- license: mit --- This model has been pretrained on MS MARCO following the approach described in the paper **Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval**. The model can be used to reproduce the experimental results in the GitHub repository https://github.com/OpenMatch/COCO-DR. The model uses BERT-large as the backbone, with 335M parameters.
allenai/scirepeval_adapters_clf
allenai
2022-10-28T00:03:35Z
14
0
adapter-transformers
[ "adapter-transformers", "adapterhub:scirepeval/classification", "bert", "dataset:allenai/scirepeval", "region:us" ]
null
2022-10-28T00:03:26Z
--- tags: - adapterhub:scirepeval/classification - adapter-transformers - bert datasets: - allenai/scirepeval --- # Adapter `allenai/scirepeval_adapters_clf` for malteos/scincl An [adapter](https://adapterhub.ml) for the `malteos/scincl` model that was trained on the [scirepeval/classification](https://adapterhub.ml/explore/scirepeval/classification/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("malteos/scincl") adapter_name = model.load_adapter("allenai/scirepeval_adapters_clf", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
rajistics/setfit-model
rajistics
2022-10-27T23:47:04Z
2
1
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-27T23:46:48Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ViktorDo/SciBERT-POWO_Climber_Finetuned
ViktorDo
2022-10-27T22:39:38Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T21:19:57Z
--- tags: - generated_from_trainer model-index: - name: SciBERT-POWO_Climber_Finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SciBERT-POWO_Climber_Finetuned This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1086 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1033 | 1.0 | 2133 | 0.1151 | | 0.0853 | 2.0 | 4266 | 0.1058 | | 0.0792 | 3.0 | 6399 | 0.1086 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
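The card above reports only the evaluation loss and does not show how to run the classifier or what its labels mean. A hedged usage sketch with the standard text-classification pipeline; the example sentence is invented, and the returned labels may be generic (e.g. LABEL_0/LABEL_1) since the card does not document them:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ViktorDo/SciBERT-POWO_Climber_Finetuned",
)

# Arbitrary botanical-description input; label semantics are not documented in the card
print(classifier("A slender woody climber, stems twining up into the canopy."))
```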
JamesH/Translation_en_to_fr_project
JamesH
2022-10-27T21:52:09Z
4
1
transformers
[ "transformers", "pytorch", "autotrain", "translation", "en", "fr", "dataset:JamesH/autotrain-data-second-project-en2fr", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
translation
2022-10-27T19:57:24Z
--- tags: - autotrain - translation language: - en - fr datasets: - JamesH/autotrain-data-second-project-en2fr co2_eq_emissions: emissions: 0.6863820434350988 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 1907464829 - CO2 Emissions (in grams): 0.6864 ## Validation Metrics - Loss: 1.117 - SacreBLEU: 16.546 - Gen len: 14.511
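The AutoTrain card lists validation metrics but no inference code. A hedged sketch assuming a standard sequence-to-sequence translation checkpoint loadable via `AutoModelForSeq2SeqLM` (the underlying architecture is not stated in the card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "JamesH/Translation_en_to_fr_project"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Translate one English sentence to French
inputs = tokenizer("The weather is lovely today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```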
wavymulder/zelda-diffusion-HN
wavymulder
2022-10-27T21:32:27Z
0
18
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-10-25T01:06:42Z
--- license: creativeml-openrail-m --- **Zelda Diffusion - Hypernet** [*DOWNLOAD LINK*](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/zeldaBOTW.pt) - This is a hypernet trained on screenshots of Princess Zelda from BOTW ![Basic Example](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/zeldaNet-example_websize.jpg) Here's a random batch of 9 images to show the hypernet uncherrypicked. The prompt is "anime princess zelda volumetric lighting" and the negative prompt is "cel render 3d animation" ![Random 9](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/batchof9_websize.jpg) and [a link to more](https://i.imgur.com/NixQGid.jpg) --- Tips: You'll want to adjust the hypernetwork strength depending on what style you're trying to put Zelda into. I usually keep it at 80% strength and go from there. This hypernetwork helps make Zelda look more like the BOTW Zelda. You still have to prompt for what you want. Extra weight might sometimes need to be applied to get her to wear costumes. You may also have luck putting her name closer to the end of the prompt than you normally would. Since the hypernetwork is trained on screenshots from the videogame, it imparts a heavy Cel Shading effect [(Example here)](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/00108-920950.png). You can minimize this by negative prompting "cel". I believe every example posted here uses this. The hypernet can be used either with very simple prompting, as shown above, or a prompt of your favourite artists. ![Artists Example](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/anime_example.jpg) You can put this hypernet on top of different models to create some really cool Zeldas, such as this one made with [Nitrosocke](https://huggingface.co/nitrosocke)'s [Modern Disney Model](https://huggingface.co/nitrosocke/modern-disney-diffusion). ![Modern Disney Example](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/modernDisney%20example.png)
RUCAIBox/elmer
RUCAIBox
2022-10-27T21:30:13Z
4
4
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "text-generation", "non-autoregressive-generation", "early-exit", "en", "arxiv:2210.13304", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-27T21:14:19Z
--- license: apache-2.0 language: - en tags: - text-generation - non-autoregressive-generation - early-exit --- # ELMER The ELMER model was proposed in [**ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation**](https://arxiv.org/abs/2210.13304) by Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/ELMER](https://github.com/RUCAIBox/ELMER). ## Model Description ELMER is an efficient and effective PLM for NAR text generation, which generates tokens at different layers by leveraging the early exit technique. The architecture of ELMER is a variant of the standard Transformer encoder-decoder and poses three technical contributions: 1. For decoder, we replace the original masked multi-head attention with bi-directional multi-head attention akin to the encoder. Therefore, ELMER dynamically adjusts the output length by emitting an end token "[EOS]" at any position. 2. Leveraging early exit, ELMER injects "off-ramps" at each decoder layer, which make predictions with intermediate hidden states. If ELMER exits at the $l$-th layer, we copy the $l$-th hidden states to the subsequent layers. 3. ELMER utilizes a novel pre-training objective, layer permutation language modeling (LPLM), to pre-train on the large-scale corpus. LPLM permutes the exit layer for each token from 1 to the maximum layer $L$. ## Examples To fine-tune ELMER on non-autoregressive text generation: ```python >>> from transformers import BartTokenizer as ElmerTokenizer >>> from transformers import BartForConditionalGeneration as ElmerForConditionalGeneration >>> tokenizer = ElmerTokenizer.from_pretrained("RUCAIBox/elmer") >>> model = ElmerForConditionalGeneration.from_pretrained("RUCAIBox/elmer") ``` ## Citation ```bibtex @article{lijunyi2022elmer, title={ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation}, author={Li, Junyi and Tang, Tianyi and Zhao, Wayne Xin and Nie, Jian-Yun and Wen, Ji-Rong}, booktitle={EMNLP 2022}, year={2022} } ```
OpenMatch/cocodr-base-msmarco-idro-only
OpenMatch
2022-10-27T21:26:19Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-10-27T21:21:56Z
--- license: mit --- This model has been pretrained on the MS MARCO corpus and then fine-tuned on MS MARCO training data with implicit distributionally robust optimization (iDRO), following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR. The model uses BERT-base as the backbone, with 110M parameters.
Phantasion/phaninc
Phantasion
2022-10-27T21:03:33Z
0
1
null
[ "region:us" ]
null
2022-10-27T20:18:49Z
![robot dog](https://i.imgur.com/rLq8IdH.png "robot dog") Phaninc is a model based on my cyberpunk tumblr blog phantasyinc. One thing that has frustrated me with AI art is the generic quality of prompting for cyberpunk imagery, so I went through my blog and curated a dataset for 25 new keywords to get the results I desire. I have been heavily inspired by the work of nousr on robodiffusion whose model gave me a lot of results I love. I have utilised the new FAST dreambooth method, and run it at 20000 steps on 684 images (around 800 steps per concept). At the time of writing the model is still training but I thought I would use my training time to summarise my intent with each keyword. I expect there to be problems and some of my experiments to not pan out so well, but I thought I would share. *Post training update: the entire model is contaminated, most prompts are gonna churn out cyberpunk work, but the keywords are still good against one another and work as desired, and the base model has had some interesting lessons taught to it.* **phanborg** This set was the first to be tested, it is a combination of portraits of cyborgs much like phancyborg and phandroid. The difference between the three is that phanborg uses a combination of images with the face covered and uncovered by machinery, while phancyborg uses only uncovered cyborgs and phandroid only covered cyborgs. The images used in all three are entirely different so that I can play with a diversity of trained features with my keywords. **phanbrutal** Images I consider a combination of cyberpunk and brutalism. **phanbw** This one is one of my more experimental keywords, utilising monochrome cyberpunk images I find quite striking in black and white. However apart from sticking to a cyberpunk theme, there is no consistent subject matter and may just end up being a generic monochrome keyword. **phancircle** another experimental keyword, this keyword utilises a selection of architectural, textural and 3d design images with circles and spheres as a recurring motif. My hope is this keyword will help provide a cyberpunk texture to other prompts with a circular motif. **phancity** Bleak futuristic cityscapes, but like phanbw this experiment may fail due to being too varied subject matter. **phanconcrete** concrete, images of architecture with mostly concrete finishes, might be overkill with phanbrutal above, but I like that there will still be nuanced differences to play with. **phanconsole** A command centre needs buttons to beep and switches to boop, this keyword is all about screens and buttons. **phancorridor** images of spaceship corridors and facilities to provide a more futuristic interior design. **phancyborg** phancyborg is an image selection of cyborgs with some or all of a human face uncovered. **phandraw** a selection focused on drawn cyberpunk artwork with bright neon colors and defined linework **phandroid** this is where I pay most homage to nousrs robodiffusion, using only cyborgs with their faces concealed or just plain humanoid robots **phandustrial** futuristic ndustrial imagery of pipes wires and messes of cables. **phanfashion** trying to get that urbanwear hoodie look but with some variations. **phanfem** a series of cyberpunk women **phanglitch** Glitch art I had reblogged on the blog with a cyberpunk feel. Quite colorful. **phangrunge** Dilapidated dens for the scum of the city. Hopefully will add a good dose of urban decay to your prompt. **phanlogo** Sleek graphic design, typography and logos. 
**phanmachine** Built with unclear subject matter, phanmachine focuses on the details of futuristic shiny machinery in hopes of it coming out as a style or texture that can be applied in prompts. **phanmecha** The three cyborg keywords are sleek and humanoid, phanmecha focuses more on creating unique robot bodytypes. **phanmilitary** Future soldiers, man and machine. Likely to attach a gun to your prompt's character. **phanneon** Bright neon lights taking over the scene, this feature is what annoyed me with a lot of cyberpunk prompts in AI models. Overall I have it pretty isolated to this keyword, if you want those futuristic glowies. **phanrooms** Totally separate from the rest of the theming, phanrooms is trained on backrooms and liminal space imagery. Which like cyberpunk is of high visual interest to me, and something the base model can sometimes struggle with. **phansterile** This is like cyberpunk cleancore, lots of white, very clean, clinical theming. **phantex** I don't know why latex outfits are cyberpunk but they just are, these images were selected for the accessorising on top of just the latex outfits. **phanture** Abstract textures that were cyberpunk enough for me to put on my blog.
motmono/ppo-LunarLander-v2
motmono
2022-10-27T20:39:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-27T20:39:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 272.74 +/- 15.00 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
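The usage section above is still the "TODO: Add your code" placeholder. A hypothetical completion of the usual Stable-Baselines3 + huggingface_sb3 loading pattern; the checkpoint filename inside the repo is an assumption, and the rollout loop assumes the classic gym step API:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is assumed; check the repo's file list)
checkpoint = load_from_hub(
    repo_id="motmono/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll the trained agent out in the environment
env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```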
andrewzhang505/sf2-lunar-lander
andrewzhang505
2022-10-27T19:51:07Z
2
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-27T19:50:47Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - metrics: - type: mean_reward value: 126.58 +/- 137.36 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLanderContinuous-v2 type: LunarLanderContinuous-v2 --- A(n) **APPO** model trained on the **LunarLanderContinuous-v2** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
Aitor/testpyramidsrnd
Aitor
2022-10-27T19:45:32Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-10-27T19:45:24Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: Aitor/testpyramidsrnd 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
PraveenKishore/Reinforce-CartPole-v1
PraveenKishore
2022-10-27T18:59:10Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-10-27T18:50:31Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 94.10 +/- 36.62 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
Eleusinian/haladas
Eleusinian
2022-10-27T18:07:35Z
0
0
null
[ "license:unknown", "region:us" ]
null
2022-10-27T18:00:03Z
--- license: unknown --- <div style='display: flex; flex-wrap: wrap; column-gap: 0.75rem;'> <img src='https://s3.amazonaws.com/moonup/production/uploads/1666893412370-noauth.jpeg' width='400' height='400'> <img src='https://s3.amazonaws.com/moonup/production/uploads/1666893411703-noauth.jpeg' width='400' height='400'> <img src='https://s3.amazonaws.com/moonup/production/uploads/1666893411826-noauth.jpeg' width='400' height='400'> <img src='https://s3.amazonaws.com/moonup/production/uploads/1666893411866-noauth.jpeg' width='400' height='400'> </div>
vict0rsch/climateGAN
vict0rsch
2022-10-27T17:49:52Z
0
2
null
[ "Climate Change", "GAN", "Domain Adaptation", "en", "license:gpl-3.0", "region:us" ]
null
2022-10-24T13:17:28Z
--- language: - en tags: - Climate Change - GAN - Domain Adaptation license: gpl-3.0 title: ClimateGAN emoji: 🌎 colorFrom: blue colorTo: green sdk: gradio sdk_version: 3.6 app_file: app.py inference: true pinned: true --- # ClimateGAN: Raising Awareness about Climate Change by Generating Images of Floods This repository contains the code used to train the model presented in our **[paper](https://openreview.net/forum?id=EZNOb_uNpJk)**. It is not simply a presentation repository but the code we have used over the past 30 months to come to our final architecture. As such, you will find many scripts, classes, blocks and options which we actively use for our own development purposes but are not directly relevant to reproduce results or use pretrained weights. ![flood processing](images/flood.png) If you use this code, data or pre-trained weights, please cite our ICLR 2022 paper: ``` @inproceedings{schmidt2022climategan, title = {Climate{GAN}: Raising Climate Change Awareness by Generating Images of Floods}, author = {Victor Schmidt and Alexandra Luccioni and M{\'e}lisande Teng and Tianyu Zhang and Alexia Reynaud and Sunand Raghupathi and Gautier Cosne and Adrien Juraver and Vahe Vardanyan and Alex Hern{\'a}ndez-Garc{\'\i}a and Yoshua Bengio}, booktitle = {International Conference on Learning Representations}, year = {2022}, url = {https://openreview.net/forum?id=EZNOb_uNpJk} } ``` ## Using pre-trained weights from this Huggingface Space and Stable Diffusion In-painting <p align="center"> <strong>Huggingface ClimateGAN Space:</strong> <a href="https://huggingface.co/spaces/vict0rsch/climateGAN" target="_blank"> <img src="https://huggingface.co/vict0rsch/climateGAN/resolve/main/images/hf-cg.png"> </a> </p> 1. Download code and model ```bash git lfs install git clone https://huggingface.co/vict0rsch/climateGAN git lfs pull # optional if you don't have the weights ``` 2. Install requirements ``` pip install requirements.txt ``` 3. **Enable Stable Diffusion Inpainting** by visiting the model's card: https://huggingface.co/runwayml/stable-diffusion-inpainting **and** running `$ huggingface-cli login` 4. Run `$ python climategan_wrapper.py help` for usage instructions on how to infer on a folder's images. 5. Run `$ python app.py` to see the Gradio app. 1. To use Google Street View you'll need an API key and set the `GMAPS_API_KEY` environment variable. 2. To use Stable Diffusion if you can't run `$ huggingface-cli login` (on a Huggingface Space for instance) set the `HF_AUTH_TOKEN` env variable to a [Huggingface authorization token](https://huggingface.co/settings/tokens) 3. To change the UI without model overhead, set the `CG_DEV_MODE` environment variable to `true`. For a more fine-grained control on ClimateGAN's inferences, refer to `apply_events.py` (does not support Stable Diffusion painter) **Note:** you don't have control on the prompt by design because I disabled the safety checker. Fork this space/repo and do it yourself if you really need to change the prompt. At least [open a discussion](https://huggingface.co/spaces/vict0rsch/climateGAN/discussions). ## Using pre-trained weights from source In the paper, we present ClimateGAN as a solution to produce images of floods. 
It can actually do **more**: * reusing the segmentation map, we are able to isolate the sky, turn it red and in a few more steps create an image resembling the consequences of a wildfire on a neighboring area, similarly to the [California wildfires](https://www.google.com/search?q=california+wildfires+red+sky&source=lnms&tbm=isch&sa=X&ved=2ahUKEwisws-hx7zxAhXxyYUKHQyKBUwQ_AUoAXoECAEQBA&biw=1680&bih=917&dpr=2). * reusing the depth map, we can simulate the consequences of a smog event on an image, scaling the intensity of the filter by the distance of an object to the camera, as per [HazeRD](http://www2.ece.rochester.edu/~gsharma/papers/Zhang_ICIP2017_HazeRD.pdf) ![image of wildfire processing](images/wildfire.png) ![image of smog processing](images/smog.png) In this section we'll explain how to produce the `Painted Input` along with the Smog and Wildfire outputs of a pre-trained ClimateGAN model. ### Installation This repository and associated model have been developed using Python 3.8.2 and **Pytorch 1.7.0**. ```bash $ git clone git@github.com:cc-ai/climategan.git $ cd climategan $ pip install -r requirements-3.8.2.txt # or `requirements-any.txt` for other Python versions (not tested but expected to be fine) ``` Our pipeline uses [comet.ml](https://comet.ml) to log images. You don't *have* to use their services but we recommend you do as images can be uploaded on your workspace instead of being written to disk. If you want to use Comet, make sure you have the [appropriate configuration in place (API key and workspace at least)](https://www.comet.ml/docs/python-sdk/advanced/#non-interactive-setup) ### Inference 1. Download and unzip the weights [from this link](https://drive.google.com/u/0/uc?id=18OCUIy7JQ2Ow_-cC5xn_hhDn-Bp45N1K&export=download) (checkout [`gdown`](https://github.com/wkentaro/gdown) for a commandline interface) and put them in `config/` ``` $ pip install gdown $ mkdir config $ cd config $ gdown https://drive.google.com/u/0/uc?id=18OCUIy7JQ2Ow_-cC5xn_hhDn-Bp45N1K $ unzip release-github-v1.zip $ cd .. ``` 2. Run from the repo's root: 1. With `comet`: ```bash python apply_events.py --batch_size 4 --half --images_paths path/to/a/folder --resume_path config/model/masker --upload ``` 2. Without `comet` (and shortened args compared to the previous example): ```bash python apply_events.py -b 4 --half -i path/to/a/folder -r config/model/masker --output_path path/to/a/folder ``` The `apply_events.py` script has many options, for instance to use a different output size than the default systematic `640 x 640` pixels, look at the code or `python apply_events.py --help`. ## Training from scratch ClimateGAN is split in two main components: the Masker producing a binary mask of where water should go and the Painter generating water within this mask given an initial image's context. ### Configuration The code is structured to use `shared/trainer/defaults.yaml` as default configuration. There are 2 ways of overriding those for your purposes (without altering that file): 1. By providing an alternative configuration as command line argument `config=path/to/config.yaml` 1. The code will first load `shared/trainer/defaults.yaml` 2. *then* update the resulting dictionary with values read in the provided `config` argument. 3. The folder `config/` is NOT tracked by git so you would typically put them there 2. 
By overwriting specific arguments from the command-line like `python train.py data.loaders.batch_size=8` ### Data #### Masker ##### Real Images Because of copyrights issues we are not able to share the real images scrapped from the internet. You would have to do that yourself. In the `yaml` config file, the code expects a key pointing to a `json` file like `data.files.<train or val>.r: <path/to/a/json/file>`. This `json` file should be a list of dictionaries with tasks as keys and files as values. Example: ```json [ { "x": "path/to/a/real/image", "s": "path/to/a/segmentation_map", "d": "path/to/a/depth_map" }, ... ] ``` Following the [ADVENT](https://github.com/valeoai/ADVENT) procedure, only `x` should be required. We use `s` and `d` inferred from pre-trained models (DeepLab v3+ and MiDAS) to use those pseudo-labels in the first epochs of training (see `pseudo:` in the config file) ##### Simulated Images We share snapshots of the Virtual World we created in the [Mila-Simulated-Flood dataset](). You can download and unzip one water-level and then produce json files similar to that of the real data, with an additional key `"m": "path/to/a/ground_truth_sim_mask"`. Lastly, edit the config file: `data.files.<train or val>.s: <path/to/a/json/file>` #### Painter The painter expects input images and binary masks to train using the [GauGAN](https://github.com/NVlabs/SPADE) training procedure. Unfortunately we cannot share openly the collected data, but similarly as for the Masker's real data you would point to the data using a `json` file as: ```json [ { "x": "path/to/a/real/image", "m": "path/to/a/water_mask", }, ... ] ``` And put those files as values to `data.files.<train or val>.rf: <path/to/a/json/file>` in the configuration. ## Coding conventions * Tasks * `x` is an input image, in [-1, 1] * `s` is a segmentation target with `long` classes * `d` is a depth map target in R, may be actually `log(depth)` or `1/depth` * `m` is a binary mask with 1s where water is/should be * Domains * `r` is the *real* domain for the masker. Input images are real pictures of urban/suburban/rural areas * `s` is the *simulated* domain for the masker. Input images are taken from our Unity world * `rf` is the *real flooded* domain for the painter. Training images are pairs `(x, m)` of flooded scenes for which the water should be reconstructed, in the validation data input images are not flooded and we provide a manually labeled mask `m` * `kitti` is a special `s` domain to pre-train the masker on [Virtual Kitti 2](https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds-vkitti-2/) * it alters the `trainer.loaders` dict to select relevant data sources from `trainer.all_loaders` in `trainer.switch_data()`. The rest of the code is identical. 
* Flow * This describes the call stack for the trainers standard training procedure * `train()` * `run_epoch()` * `update_G()` * `zero_grad(G)` * `get_G_loss()` * `get_masker_loss()` * `masker_m_loss()` -> masking loss * `masker_s_loss()` -> segmentation loss * `masker_d_loss()` -> depth estimation loss * `get_painter_loss()` -> painter's loss * `g_loss.backward()` * `g_opt_step()` * `update_D()` * `zero_grad(D)` * `get_D_loss()` * painter's disc losses * `masker_m_loss()` -> masking AdvEnt disc loss * `masker_s_loss()` -> segmentation AdvEnt disc loss * `d_loss.backward()` * `d_opt_step()` * `update_learning_rates()` -> update learning rates according to schedules defined in `opts.gen.opt` and `opts.dis.opt` * `run_validation()` * compute val losses * `eval_images()` -> compute metrics * `log_comet_images()` -> compute and upload inferences * `save()`
Houryy/Houry
Houryy
2022-10-27T16:27:08Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2022-10-27T16:27:08Z
--- license: bigscience-openrail-m ---
hagerty7/recyclable-materials-classification
hagerty7
2022-10-27T15:54:32Z
42
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-24T15:10:05Z
ViT for Recyclable Material Classification
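The card is a single line with no usage instructions. A hedged sketch based on the repo's `image-classification` pipeline tag and ViT architecture; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hagerty7/recyclable-materials-classification",
)

# Any local image path or URL works; this one is a placeholder
for pred in classifier("path/to/plastic_bottle.jpg"):
    print(pred["label"], round(pred["score"], 3))
```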
mgb-dx-meetup/distilbert-multilingual-finetuned-sentiment
mgb-dx-meetup
2022-10-27T15:43:10Z
100
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:lewtun/autotrain-data-mgb-product-reviews-mbert", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T15:34:22Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - lewtun/autotrain-data-mgb-product-reviews-mbert co2_eq_emissions: emissions: 5.523107849339405 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1904564767 - CO2 Emissions (in grams): 5.5231 ## Validation Metrics - Loss: 1.135 - Accuracy: 0.514 - Macro F1: 0.504 - Micro F1: 0.514 - Weighted F1: 0.505 - Macro Precision: 0.506 - Micro Precision: 0.514 - Weighted Precision: 0.507 - Macro Recall: 0.513 - Micro Recall: 0.514 - Weighted Recall: 0.514 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lewtun/autotrain-mgb-product-reviews-mbert-1904564767 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lewtun/autotrain-mgb-product-reviews-mbert-1904564767", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-mgb-product-reviews-mbert-1904564767", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
mgb-dx-meetup/xlm-roberta-finetuned-sentiment
mgb-dx-meetup
2022-10-27T15:37:04Z
102
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:lewtun/autotrain-data-mgb-product-reviews-xlm-r", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T15:17:01Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - lewtun/autotrain-data-mgb-product-reviews-xlm-r co2_eq_emissions: emissions: 19.116414139555882 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1904264758 - CO2 Emissions (in grams): 19.1164 ## Validation Metrics - Loss: 1.021 - Accuracy: 0.563 - Macro F1: 0.555 - Micro F1: 0.563 - Weighted F1: 0.556 - Macro Precision: 0.555 - Micro Precision: 0.563 - Weighted Precision: 0.556 - Macro Recall: 0.562 - Micro Recall: 0.563 - Weighted Recall: 0.563 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lewtun/autotrain-mgb-product-reviews-xlm-r-1904264758 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lewtun/autotrain-mgb-product-reviews-xlm-r-1904264758", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-mgb-product-reviews-xlm-r-1904264758", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Sennodipoi/LayoutLMv3-FUNSD-ft
Sennodipoi
2022-10-27T15:29:16Z
5
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-23T08:14:07Z
LayoutLMv3 fine-tuned on the FUNSD dataset. Code and results are available at the official GitHub repository of my [Master's Degree thesis](https://github.com/AleRosae/thesis-layoutlm). Results obtained using seqeval in strict mode:

| | Precision | Recall | F1-score | Variance (F1) |
|--------------|-----------|--------|----------|---------------|
| Answer | 0.90 | 0.91 | 0.90 | 3e-5 |
| Header | 0.61 | 0.66 | 0.63 | 4e-4 |
| Question | 0.88 | 0.87 | 0.88 | 1e-4 |
| Micro avg | 0.87 | 0.88 | 0.87 | 3e-5 |
| Macro avg | 0.79 | 0.82 | 0.80 | 3e-5 |
| Weighted avg | 0.87 | 0.88 | 0.87 | 3e-5 |
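The card reports FUNSD scores but no inference example. A hedged sketch of token classification with LayoutLMv3; the processor is taken from the base `microsoft/layoutlmv3-base` checkpoint on the assumption that the fine-tuned repo may not ship preprocessing files, `apply_ocr=True` requires pytesseract, and the document image path is a placeholder:

```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# Base processor with built-in OCR (assumption: fine-tuned repo may lack processor files)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("Sennodipoi/LayoutLMv3-FUNSD-ft")

image = Image.open("form.png").convert("RGB")  # placeholder scanned form
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)

# Map the argmax class ids back to the label names stored in the model config
predicted_ids = outputs.logits.argmax(-1)[0]
print([model.config.id2label[int(i)] for i in predicted_ids])
```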
Sennodipoi/LayoutLMv1-FUNSD-ft
Sennodipoi
2022-10-27T15:27:32Z
5
0
transformers
[ "transformers", "pytorch", "layoutlm", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-23T08:10:54Z
LayoutLMv1 fine-tuned on the FUNSD dataset. Code and results are available at the official GitHub repository of my [Master's Degree thesis](https://github.com/AleRosae/thesis-layoutlm). Results obtained using seqeval in strict mode:

| | Precision | Recall | F1-score | Variance (F1) |
|--------------|-----------|--------|----------|---------------|
| ANSWER | 0.80 | 0.78 | 0.80 | 1e-4 |
| HEADER | 0.62 | 0.47 | 0.53 | 2e-4 |
| QUESTION | 0.85 | 0.71 | 0.83 | 3e-5 |
| Micro avg | 0.83 | 0.77 | 0.81 | 1e-4 |
| Macro avg | 0.77 | 0.56 | 0.72 | 3e-5 |
| Weighted avg | 0.83 | 0.78 | 0.80 | 1e-4 |
Sennodipoi/LayoutLMv3-kleisterNDA
Sennodipoi
2022-10-27T15:26:00Z
5
1
transformers
[ "transformers", "pytorch", "layoutlmv3", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-24T15:26:45Z
LayoutLMv3 fine-tuned on the Kleister-NDA dataset. Code (including pre-processing) and results are available at the official GitHub repository of my [Master's Degree thesis](https://github.com/AleRosae/thesis-layoutlm). Results obtained with seqeval in strict mode:

| | Precision | Recall | F1-score | Variance (F1) |
|----------------|-----------|--------|----------|---------------|
| EFFECTIVE_DATE | 0.92 | 0.99 | 0.95 | 5e-5 |
| JURISDICTION | 0.87 | 0.88 | 0.88 | 8e-6 |
| PARTY | 0.92 | 0.99 | 0.95 | 5e-5 |
| TERM | 1 | 1 | 1 | 0 |
| Micro avg | 0.91 | 0.96 | 0.94 | 2e-5 |
| Macro avg | 0.92 | 0.96 | 0.94 | 3e-7 |
| Weighted avg | 0.91 | 0.96 | 0.94 | 2e-5 |

Since I used the same segmentation strategy as the original paper, i.e. using the labels to create segments, the scores are not directly comparable with the other LayoutLM versions.
Sennodipoi/LayoutLMv1-kleisterNDA
Sennodipoi
2022-10-27T15:18:42Z
5
0
transformers
[ "transformers", "pytorch", "layoutlm", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-24T07:33:25Z
LayoutLMv1 fine-tuned on the Kleister-NDA dataset. Code (including pre-processing) and results are available at the official GitHub repository of my [Master's Degree thesis](https://github.com/AleRosae/thesis-layoutlm). Results obtained with seqeval in strict mode:

| | Precision | Recall | F1-score | Variance (F1) |
|----------------|-----------|--------|----------|---------------|
| EFFECTIVE_DATE | 0.87 | 0.51 | 0.64 | 2e-6 |
| JURISDICTION | 0.75 | 0.84 | 0.80 | 4e-7 |
| PARTY | 0.84 | 0.71 | 0.77 | 9e-6 |
| TERM | 0.69 | 0.51 | 0.58 | 1e-3 |
| Micro avg | 0.81 | 0.72 | 0.77 | 2e-6 |
| Macro avg | 0.79 | 0.65 | 0.70 | 9e-5 |
| Weighted avg | 0.82 | 0.73 | 0.76 | 3e-6 |
alanakbik/test-push-public
alanakbik
2022-10-27T15:10:07Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "region:us" ]
token-classification
2022-10-27T15:07:07Z
--- tags: - flair - token-classification - sequence-tagger-model --- ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("alanakbik/test-push-public") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
yubol/bert-finetuned-ner-30
yubol
2022-10-27T15:03:09Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-27T13:19:19Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0453 - Precision: 0.9275 - Recall: 0.9492 - F1: 0.9382 - Accuracy: 0.9934 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 407 | 0.0539 | 0.8283 | 0.8758 | 0.8514 | 0.9866 | | 0.1524 | 2.0 | 814 | 0.0333 | 0.8931 | 0.9123 | 0.9026 | 0.9915 | | 0.0381 | 3.0 | 1221 | 0.0345 | 0.8835 | 0.9280 | 0.9052 | 0.9906 | | 0.0179 | 4.0 | 1628 | 0.0351 | 0.8890 | 0.9361 | 0.9119 | 0.9909 | | 0.0089 | 5.0 | 2035 | 0.0310 | 0.9102 | 0.9372 | 0.9235 | 0.9924 | | 0.0089 | 6.0 | 2442 | 0.0344 | 0.9198 | 0.9383 | 0.9289 | 0.9922 | | 0.0057 | 7.0 | 2849 | 0.0331 | 0.9144 | 0.9448 | 0.9294 | 0.9931 | | 0.0039 | 8.0 | 3256 | 0.0340 | 0.9144 | 0.9481 | 0.9309 | 0.9928 | | 0.0027 | 9.0 | 3663 | 0.0423 | 0.9032 | 0.9481 | 0.9251 | 0.9921 | | 0.0018 | 10.0 | 4070 | 0.0373 | 0.9047 | 0.9507 | 0.9271 | 0.9923 | | 0.0018 | 11.0 | 4477 | 0.0448 | 0.8932 | 0.9474 | 0.9195 | 0.9910 | | 0.0014 | 12.0 | 4884 | 0.0380 | 0.9079 | 0.9474 | 0.9272 | 0.9928 | | 0.0015 | 13.0 | 5291 | 0.0360 | 0.9231 | 0.9474 | 0.9351 | 0.9936 | | 0.0013 | 14.0 | 5698 | 0.0378 | 0.9243 | 0.9456 | 0.9348 | 0.9935 | | 0.0013 | 15.0 | 6105 | 0.0414 | 0.9197 | 0.9496 | 0.9344 | 0.9930 | | 0.0009 | 16.0 | 6512 | 0.0405 | 0.9202 | 0.9478 | 0.9338 | 0.9929 | | 0.0009 | 17.0 | 6919 | 0.0385 | 0.9305 | 0.9441 | 0.9373 | 0.9933 | | 0.0006 | 18.0 | 7326 | 0.0407 | 0.9285 | 0.9437 | 0.9360 | 0.9934 | | 0.0009 | 19.0 | 7733 | 0.0428 | 0.9203 | 0.9488 | 0.9343 | 0.9929 | | 0.0006 | 20.0 | 8140 | 0.0455 | 0.9232 | 0.9536 | 0.9382 | 0.9928 | | 0.0004 | 21.0 | 8547 | 0.0462 | 0.9261 | 0.9529 | 0.9393 | 0.9930 | | 0.0004 | 22.0 | 8954 | 0.0423 | 0.9359 | 0.9492 | 0.9425 | 0.9940 | | 0.0005 | 23.0 | 9361 | 0.0446 | 0.9180 | 0.9529 | 0.9351 | 0.9931 | | 0.0005 | 24.0 | 9768 | 0.0430 | 0.9361 | 0.9467 | 0.9413 | 0.9938 | | 0.0002 | 25.0 | 10175 | 0.0436 | 0.9322 | 0.9496 | 0.9408 | 0.9935 | | 0.0002 | 26.0 | 10582 | 0.0440 | 0.9275 | 0.9492 | 0.9382 | 0.9935 | | 0.0002 | 27.0 | 10989 | 0.0450 | 0.9272 | 0.9488 | 0.9379 | 0.9932 | | 0.0002 | 28.0 | 11396 | 0.0445 | 0.9304 | 0.9470 | 0.9386 | 0.9935 | | 0.0003 | 29.0 | 11803 | 0.0449 | 0.9278 | 0.9481 | 0.9378 | 0.9934 | | 0.0001 | 30.0 | 12210 | 0.0453 | 0.9275 | 0.9492 | 0.9382 | 0.9934 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
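The card itself does not include a usage snippet; a minimal sketch with the token-classification pipeline (the example sentence is arbitrary) might look like this:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces back into whole entity spans
ner = pipeline(
    "token-classification",
    model="yubol/bert-finetuned-ner-30",
    aggregation_strategy="simple",
)
print(ner("Hugging Face Inc. is based in New York City."))
```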
pig4431/sst2_bert_3epoch
pig4431
2022-10-27T15:01:53Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-27T14:55:30Z
--- tags: - generated_from_trainer model-index: - name: sst2_bert_3epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sst2_bert_3epoch This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
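As a hedged usage sketch (not in the original card): the checkpoint can be loaded with the text-classification pipeline, although the returned label names depend on how the head was configured and may appear as generic LABEL_0/LABEL_1.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/sst2_bert_3epoch")
print(classifier("A thoroughly enjoyable film with a clever script."))
```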
Shri3/q-FrozenLake-v1-4x4-noSlippery
Shri3
2022-10-27T14:33:14Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-27T14:07:26Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Shri3/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
huggingtweets/tykesinties
huggingtweets
2022-10-27T14:31:37Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-25T19:33:52Z
--- language: en thumbnail: http://www.huggingtweets.com/tykesinties/1666881093237/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/917201427583438848/X-zHDjYL_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">RegressCo H.R.</div> <div style="text-align: center; font-size: 14px;">@tykesinties</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from RegressCo H.R.. | Data | RegressCo H.R. | | --- | --- | | Tweets downloaded | 1844 | | Retweets | 215 | | Short tweets | 27 | | Tweets kept | 1602 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pqqtat7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tykesinties's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vqh1gov) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vqh1gov/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tykesinties') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
yeahrmek/arxiv-math-lean
yeahrmek
2022-10-27T14:05:48Z
0
0
null
[ "region:us" ]
null
2022-10-27T12:23:41Z
This is a BPE tokenizer based on "Salesforce/codegen-350M-mono". The tokenizer has been trained to treat spaces as parts of the tokens (a bit like SentencePiece), so a word is encoded differently depending on whether or not it appears at the beginning of a sentence (i.e. without a leading space). We used the ArXiv subset of The Pile dataset and proof steps from the [lean-step-public](https://github.com/jesse-michael-han/lean-step-public) dataset to train the tokenizer.
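A small sketch illustrating the leading-space behaviour described above (it assumes the tokenizer files are hosted in this repository and loadable with `AutoTokenizer`):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yeahrmek/arxiv-math-lean")

# The same word is split differently with and without a leading space,
# because spaces are treated as part of the tokens.
print(tokenizer.tokenize("theorem"))
print(tokenizer.tokenize(" theorem"))
```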
OWG/imagegpt-small
OWG
2022-10-27T13:10:17Z
0
0
null
[ "onnx", "vision", "dataset:imagenet-21k", "license:apache-2.0", "region:us" ]
null
2022-10-27T11:52:39Z
--- license: apache-2.0 tags: - vision datasets: - imagenet-21k --- # ImageGPT (small-sized model) ImageGPT (iGPT) model pre-trained on ImageNet ILSVRC 2012 (14 million images, 21,843 classes) at resolution 32x32. It was introduced in the paper [Generative Pretraining from Pixels](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf) by Chen et al. and first released in [this repository](https://github.com/openai/image-gpt). See also the official [blog post](https://openai.com/blog/image-gpt/). ## Model description ImageGPT (iGPT) is a transformer decoder model (GPT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 32x32 pixels. The goal for the model is simply to predict the next pixel value, given the previous ones. By pre-training the model, it learns an inner representation of images that can then be used to: - extract features useful for downstream tasks: one can use ImageGPT to produce fixed image features in order to train a linear model (such as a sklearn logistic regression model or an SVM). This is also referred to as "linear probing". - perform (un)conditional image generation. ## Intended uses & limitations You can use the raw model either as a feature extractor or for (un)conditional image generation. ### How to use Here is how to use this model as a feature extractor: ```python from transformers import AutoFeatureExtractor from onnxruntime import InferenceSession from datasets import load_dataset # load image dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] # load model feature_extractor = AutoFeatureExtractor.from_pretrained("openai/imagegpt-small") session = InferenceSession("model/model.onnx") # ONNX Runtime expects NumPy arrays as input inputs = feature_extractor(image, return_tensors="np") outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` Or you can use the model with a classification head that returns logits: ```python from transformers import AutoFeatureExtractor from onnxruntime import InferenceSession from datasets import load_dataset # load image dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] # load model feature_extractor = AutoFeatureExtractor.from_pretrained("openai/imagegpt-small") session = InferenceSession("model/model_classification.onnx") # ONNX Runtime expects NumPy arrays as input inputs = feature_extractor(image, return_tensors="np") outputs = session.run(output_names=["logits"], input_feed=dict(inputs)) ``` ## Original implementation Follow [this link](https://huggingface.co/openai/imagegpt-small) to see the original implementation. ## Training data The ImageGPT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes. ## Training procedure ### Preprocessing Images are first resized/rescaled to the same resolution (32x32) and normalized across the RGB channels. Next, color-clustering is performed. This means that every pixel is turned into one of 512 possible cluster values. This way, one ends up with a sequence of 32x32 = 1024 pixel values, rather than 32x32x3 = 3072, which is prohibitively large for Transformer-based models. ### Pretraining Training details can be found in section 3.4 of v2 of the paper. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to the original paper.
### BibTeX entry and citation info ```bibtex @InProceedings{pmlr-v119-chen20s, title = {Generative Pretraining From Pixels}, author = {Chen, Mark and Radford, Alec and Child, Rewon and Wu, Jeffrey and Jun, Heewoo and Luan, David and Sutskever, Ilya}, booktitle = {Proceedings of the 37th International Conference on Machine Learning}, pages = {1691--1703}, year = {2020}, editor = {III, Hal Daumé and Singh, Aarti}, volume = {119}, series = {Proceedings of Machine Learning Research}, month = {13--18 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v119/chen20s/chen20s.pdf}, url = {https://proceedings.mlr.press/v119/chen20s.html} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={IEEE} } ```
kevinbror/bertbaseuncasedny
kevinbror
2022-10-27T12:13:45Z
61
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-10-27T12:13:00Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: bertbaseuncasedny results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bertbaseuncasedny This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3901 - Train End Logits Accuracy: 0.8823 - Train Start Logits Accuracy: 0.8513 - Validation Loss: 1.2123 - Validation End Logits Accuracy: 0.7291 - Validation Start Logits Accuracy: 0.6977 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.2597 | 0.6683 | 0.6277 | 1.0151 | 0.7214 | 0.6860 | 0 | | 0.7699 | 0.7820 | 0.7427 | 1.0062 | 0.7342 | 0.6996 | 1 | | 0.5343 | 0.8425 | 0.8064 | 1.1162 | 0.7321 | 0.7010 | 2 | | 0.3901 | 0.8823 | 0.8513 | 1.2123 | 0.7291 | 0.6977 | 3 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
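A minimal question-answering sketch (not from the original card; it assumes the repository also contains the tokenizer files, and passes `framework="tf"` because the checkpoint holds TensorFlow weights):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="kevinbror/bertbaseuncasedny", framework="tf")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```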
kosec39/distilbert-base-uncased-finetuned-imdb
kosec39
2022-10-27T12:00:24Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-27T11:31:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
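A minimal fill-mask sketch (not part of the original card; the example sentence is arbitrary):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="kosec39/distilbert-base-uncased-finetuned-imdb")

# Print the top predictions for the masked token with their scores
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```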
Rijgersberg/whisper-small-fy-NL
Rijgersberg
2022-10-27T08:50:21Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-25T22:17:08Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: whisper-small-fy-NL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-fy-NL This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [CommonVoice 11 `fy-NL` (West-Frisian)](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/fy-NL/train) dataset. It achieves the following results on the evaluation set: - Loss: 0.5276 - Wer: 0.2919 The Wer before finetuning was 1.0622. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | | 0 | 0 | | 1.0622| | 0.9177 | 1.0 | 211 | 0.8145 | 0.3450 | | 0.5807 | 2.0 | 422 | 0.7113 | 0.3640 | | 0.2884 | 3.0 | 633 | 0.5276 | 0.2919 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
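A minimal transcription sketch (not from the original card; `sample.wav` is a placeholder path to a West Frisian recording):

```python
from transformers import pipeline

# chunk_length_s lets the pipeline handle audio longer than Whisper's 30-second window
asr = pipeline(
    "automatic-speech-recognition",
    model="Rijgersberg/whisper-small-fy-NL",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])
```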
teacookies/autotrain-27102022-cert-1899564594
teacookies
2022-10-27T07:34:21Z
10
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-27102022-cert", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-27T07:21:17Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-27102022-cert co2_eq_emissions: emissions: 22.03607609264655 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1899564594 - CO2 Emissions (in grams): 22.0361 ## Validation Metrics - Loss: 0.003 - Accuracy: 0.999 - Precision: 0.981 - Recall: 0.982 - F1: 0.981 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-27102022-cert-1899564594 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-27102022-cert-1899564594", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-27102022-cert-1899564594", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
pouxie/LaBSE-en-ru-bviolet
pouxie
2022-10-27T07:21:10Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-27T04:29:25Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4095 with parameters: ``` {'batch_size': 8} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1228, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
teacookies/autotrain-27102022-cert1-1899464570
teacookies
2022-10-27T06:29:42Z
13
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-27102022-cert1", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-27T06:19:22Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-27102022-cert1 co2_eq_emissions: emissions: 16.254745105263574 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1899464570 - CO2 Emissions (in grams): 16.2547 ## Validation Metrics - Loss: 0.004 - Accuracy: 0.999 - Precision: 0.972 - Recall: 0.979 - F1: 0.975 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-27102022-cert1-1899464570 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-27102022-cert1-1899464570", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-27102022-cert1-1899464570", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
huggingtweets/daymoded-menthalovely-scolopendridaes
huggingtweets
2022-10-27T05:43:26Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-27T05:26:45Z
--- language: en thumbnail: http://www.huggingtweets.com/daymoded-menthalovely-scolopendridaes/1666849354903/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1541285406531956736/T36HqJWY_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1576010406446907395/cXmkdxpb_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1576595483157749760/GgLl95Ug_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">meri & Mentha & 𓆣</div> <div style="text-align: center; font-size: 14px;">@daymoded-menthalovely-scolopendridaes</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from meri & Mentha & 𓆣. | Data | meri | Mentha | 𓆣 | | --- | --- | --- | --- | | Tweets downloaded | 3208 | 3203 | 646 | | Retweets | 595 | 1723 | 407 | | Short tweets | 560 | 449 | 131 | | Tweets kept | 2053 | 1031 | 108 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ervd3sj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @daymoded-menthalovely-scolopendridaes's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28d01du3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28d01du3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/daymoded-menthalovely-scolopendridaes') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Negs/ddpm-butterflies-128
Negs
2022-10-27T04:07:05Z
4
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-27T02:51:00Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/Negs/ddpm-butterflies-128/tensorboard?#scalars)
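Since the snippet above is left as a TODO, here is a hedged sketch of standard unconditional sampling with 🤗 Diffusers' `DDPMPipeline` (not taken from the original card; the output filename is arbitrary):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Negs/ddpm-butterflies-128")
pipeline.to("cuda")  # optional; drop this line to sample on CPU (much slower)

# Run the full 1000-step DDPM sampling loop and save the generated butterfly
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly.png")
```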
PKR/swin-tiny-patch4-window7-224-finetuned-eurosat
PKR
2022-10-27T03:21:42Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-27T02:53:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9814814814814815 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0593 - Accuracy: 0.9815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2731 | 1.0 | 190 | 0.1128 | 0.9637 | | 0.1862 | 2.0 | 380 | 0.0759 | 0.9759 | | 0.1409 | 3.0 | 570 | 0.0593 | 0.9815 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
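A minimal inference sketch (not part of the original card; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="PKR/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_tile.png"))  # path or URL to an RGB image
```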
Alex-VisTas/swin-tiny-patch4-window7-224-finetuned-woody_130epochs
Alex-VisTas
2022-10-27T03:11:10Z
49
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-26T14:13:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-woody_130epochs results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8921212121212121 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-woody_130epochs This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4550 - Accuracy: 0.8921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 130 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6694 | 1.0 | 58 | 0.6370 | 0.6594 | | 0.6072 | 2.0 | 116 | 0.5813 | 0.7030 | | 0.6048 | 3.0 | 174 | 0.5646 | 0.7030 | | 0.5849 | 4.0 | 232 | 0.5778 | 0.6970 | | 0.5671 | 5.0 | 290 | 0.5394 | 0.7236 | | 0.5575 | 6.0 | 348 | 0.5212 | 0.7382 | | 0.568 | 7.0 | 406 | 0.5218 | 0.7358 | | 0.5607 | 8.0 | 464 | 0.5183 | 0.7527 | | 0.5351 | 9.0 | 522 | 0.5138 | 0.7467 | | 0.5459 | 10.0 | 580 | 0.5290 | 0.7394 | | 0.5454 | 11.0 | 638 | 0.5212 | 0.7345 | | 0.5291 | 12.0 | 696 | 0.5130 | 0.7576 | | 0.5378 | 13.0 | 754 | 0.5372 | 0.7503 | | 0.5264 | 14.0 | 812 | 0.6089 | 0.6861 | | 0.4909 | 15.0 | 870 | 0.4852 | 0.7636 | | 0.5591 | 16.0 | 928 | 0.4817 | 0.76 | | 0.4966 | 17.0 | 986 | 0.5673 | 0.6933 | | 0.4988 | 18.0 | 1044 | 0.5131 | 0.7418 | | 0.5339 | 19.0 | 1102 | 0.4998 | 0.7394 | | 0.4804 | 20.0 | 1160 | 0.4655 | 0.7733 | | 0.503 | 21.0 | 1218 | 0.4554 | 0.7685 | | 0.4859 | 22.0 | 1276 | 0.4713 | 0.7770 | | 0.504 | 23.0 | 1334 | 0.4545 | 0.7721 | | 0.478 | 24.0 | 1392 | 0.4658 | 0.7830 | | 0.4759 | 25.0 | 1450 | 0.4365 | 0.8012 | | 0.4686 | 26.0 | 1508 | 0.4452 | 0.7855 | | 0.4668 | 27.0 | 1566 | 0.4427 | 0.7879 | | 0.4615 | 28.0 | 1624 | 0.4439 | 0.7685 | | 0.4588 | 29.0 | 1682 | 0.4378 | 0.7830 | | 0.4588 | 30.0 | 1740 | 0.4229 | 0.7988 | | 0.4296 | 31.0 | 1798 | 0.4188 | 0.7976 | | 0.4208 | 32.0 | 1856 | 0.4316 | 0.7891 | | 0.4481 | 33.0 | 1914 | 0.4331 | 0.7891 | | 0.4253 | 34.0 | 1972 | 0.4524 | 0.7879 | | 0.4117 | 35.0 | 2030 | 0.4570 | 0.7952 | | 0.4405 | 36.0 | 2088 | 0.4307 | 0.7927 | | 0.4154 | 37.0 | 2146 | 0.4257 | 0.8024 | | 0.3962 | 38.0 | 2204 | 0.5077 | 0.7818 | | 0.414 | 39.0 | 2262 | 0.4602 | 0.8012 | | 0.3937 | 40.0 | 2320 | 0.4741 | 0.7770 | | 0.4186 | 41.0 | 2378 | 0.4250 | 0.8 | | 0.4076 | 42.0 | 2436 | 0.4353 | 0.7988 | | 0.3777 | 43.0 | 2494 | 0.4442 | 0.7879 | | 0.3968 | 44.0 | 2552 | 0.4525 | 0.7879 | | 0.377 | 45.0 | 2610 | 0.4198 | 0.7988 | | 0.378 | 46.0 | 2668 | 0.4297 | 0.8097 | | 0.3675 | 47.0 
| 2726 | 0.4435 | 0.8085 | | 0.3562 | 48.0 | 2784 | 0.4477 | 0.7952 | | 0.381 | 49.0 | 2842 | 0.4206 | 0.8255 | | 0.3603 | 50.0 | 2900 | 0.4136 | 0.8109 | | 0.3331 | 51.0 | 2958 | 0.4141 | 0.8230 | | 0.3471 | 52.0 | 3016 | 0.4253 | 0.8109 | | 0.346 | 53.0 | 3074 | 0.5203 | 0.8048 | | 0.3481 | 54.0 | 3132 | 0.4288 | 0.8242 | | 0.3411 | 55.0 | 3190 | 0.4416 | 0.8194 | | 0.3275 | 56.0 | 3248 | 0.4149 | 0.8291 | | 0.3067 | 57.0 | 3306 | 0.4623 | 0.8218 | | 0.3166 | 58.0 | 3364 | 0.4432 | 0.8255 | | 0.3294 | 59.0 | 3422 | 0.4599 | 0.8267 | | 0.3146 | 60.0 | 3480 | 0.4266 | 0.8291 | | 0.3091 | 61.0 | 3538 | 0.4318 | 0.8315 | | 0.3277 | 62.0 | 3596 | 0.4252 | 0.8242 | | 0.296 | 63.0 | 3654 | 0.4332 | 0.8436 | | 0.3241 | 64.0 | 3712 | 0.4729 | 0.8194 | | 0.3104 | 65.0 | 3770 | 0.4228 | 0.8448 | | 0.2878 | 66.0 | 3828 | 0.4173 | 0.8388 | | 0.265 | 67.0 | 3886 | 0.4210 | 0.8497 | | 0.3011 | 68.0 | 3944 | 0.4276 | 0.8436 | | 0.2861 | 69.0 | 4002 | 0.4923 | 0.8315 | | 0.2994 | 70.0 | 4060 | 0.4472 | 0.8182 | | 0.276 | 71.0 | 4118 | 0.4541 | 0.8315 | | 0.2796 | 72.0 | 4176 | 0.4218 | 0.8521 | | 0.2727 | 73.0 | 4234 | 0.4053 | 0.8448 | | 0.255 | 74.0 | 4292 | 0.4356 | 0.8376 | | 0.276 | 75.0 | 4350 | 0.4193 | 0.8436 | | 0.261 | 76.0 | 4408 | 0.4484 | 0.8533 | | 0.2416 | 77.0 | 4466 | 0.4722 | 0.8194 | | 0.2602 | 78.0 | 4524 | 0.4431 | 0.8533 | | 0.2591 | 79.0 | 4582 | 0.4269 | 0.8606 | | 0.2613 | 80.0 | 4640 | 0.4335 | 0.8485 | | 0.2555 | 81.0 | 4698 | 0.4269 | 0.8594 | | 0.2832 | 82.0 | 4756 | 0.3968 | 0.8715 | | 0.264 | 83.0 | 4814 | 0.4173 | 0.8703 | | 0.2462 | 84.0 | 4872 | 0.4150 | 0.8606 | | 0.2424 | 85.0 | 4930 | 0.4377 | 0.8630 | | 0.2574 | 86.0 | 4988 | 0.4120 | 0.8679 | | 0.2273 | 87.0 | 5046 | 0.4393 | 0.8533 | | 0.2334 | 88.0 | 5104 | 0.4366 | 0.8630 | | 0.2258 | 89.0 | 5162 | 0.4189 | 0.8630 | | 0.2153 | 90.0 | 5220 | 0.4474 | 0.8630 | | 0.2462 | 91.0 | 5278 | 0.4362 | 0.8642 | | 0.2356 | 92.0 | 5336 | 0.4454 | 0.8715 | | 0.2019 | 93.0 | 5394 | 0.4413 | 0.88 | | 0.209 | 94.0 | 5452 | 0.4410 | 0.8703 | | 0.2201 | 95.0 | 5510 | 0.4323 | 0.8691 | | 0.2245 | 96.0 | 5568 | 0.4999 | 0.8618 | | 0.2178 | 97.0 | 5626 | 0.4612 | 0.8655 | | 0.2163 | 98.0 | 5684 | 0.4340 | 0.8703 | | 0.2228 | 99.0 | 5742 | 0.4504 | 0.8788 | | 0.2151 | 100.0 | 5800 | 0.4602 | 0.8703 | | 0.1988 | 101.0 | 5858 | 0.4414 | 0.8812 | | 0.2227 | 102.0 | 5916 | 0.4392 | 0.8824 | | 0.1772 | 103.0 | 5974 | 0.5069 | 0.8630 | | 0.2199 | 104.0 | 6032 | 0.4648 | 0.8667 | | 0.1936 | 105.0 | 6090 | 0.4806 | 0.8691 | | 0.199 | 106.0 | 6148 | 0.4569 | 0.8764 | | 0.2149 | 107.0 | 6206 | 0.4445 | 0.8739 | | 0.1917 | 108.0 | 6264 | 0.4444 | 0.8727 | | 0.201 | 109.0 | 6322 | 0.4594 | 0.8727 | | 0.1938 | 110.0 | 6380 | 0.4564 | 0.8764 | | 0.1977 | 111.0 | 6438 | 0.4398 | 0.8739 | | 0.1776 | 112.0 | 6496 | 0.4356 | 0.88 | | 0.1939 | 113.0 | 6554 | 0.4412 | 0.8848 | | 0.178 | 114.0 | 6612 | 0.4373 | 0.88 | | 0.1926 | 115.0 | 6670 | 0.4508 | 0.8812 | | 0.1979 | 116.0 | 6728 | 0.4477 | 0.8848 | | 0.1958 | 117.0 | 6786 | 0.4488 | 0.8897 | | 0.189 | 118.0 | 6844 | 0.4553 | 0.8836 | | 0.1838 | 119.0 | 6902 | 0.4605 | 0.8848 | | 0.1755 | 120.0 | 6960 | 0.4463 | 0.8836 | | 0.1958 | 121.0 | 7018 | 0.4474 | 0.8861 | | 0.1857 | 122.0 | 7076 | 0.4550 | 0.8921 | | 0.1466 | 123.0 | 7134 | 0.4494 | 0.8885 | | 0.1751 | 124.0 | 7192 | 0.4560 | 0.8873 | | 0.175 | 125.0 | 7250 | 0.4383 | 0.8897 | | 0.207 | 126.0 | 7308 | 0.4601 | 0.8873 | | 0.1756 | 127.0 | 7366 | 0.4425 | 0.8897 | | 0.1695 | 128.0 | 7424 | 0.4533 | 0.8909 | | 0.1873 | 129.0 | 7482 | 0.4510 | 
0.8897 | | 0.1726 | 130.0 | 7540 | 0.4463 | 0.8909 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
sd-concepts-library/msg
sd-concepts-library
2022-10-27T00:39:23Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-10-27T00:39:18Z
--- license: mit --- ### MSG on Stable Diffusion This is the `<MSG69>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<MSG69> 0](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/5.jpeg) ![<MSG69> 1](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/8.jpeg) ![<MSG69> 2](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/12.jpeg) ![<MSG69> 3](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/15.jpeg) ![<MSG69> 4](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/17.jpeg) ![<MSG69> 5](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/24.jpeg) ![<MSG69> 6](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/3.jpeg) ![<MSG69> 7](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/0.jpeg) ![<MSG69> 8](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/22.jpeg) ![<MSG69> 9](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/18.jpeg) ![<MSG69> 10](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/25.jpeg) ![<MSG69> 11](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/23.jpeg) ![<MSG69> 12](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/9.jpeg) ![<MSG69> 13](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/16.jpeg) ![<MSG69> 14](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/6.jpeg) ![<MSG69> 15](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/19.jpeg) ![<MSG69> 16](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/2.jpeg) ![<MSG69> 17](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/13.jpeg) ![<MSG69> 18](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/14.jpeg) ![<MSG69> 19](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/1.jpeg) ![<MSG69> 20](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/26.jpeg) ![<MSG69> 21](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/10.jpeg) ![<MSG69> 22](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/20.jpeg) ![<MSG69> 23](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/4.jpeg) ![<MSG69> 24](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/7.jpeg) ![<MSG69> 25](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/11.jpeg) ![<MSG69> 26](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/21.jpeg)
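Beyond the linked notebooks, a hedged sketch of loading the embedding with 🤗 Diffusers (requires a diffusers version that provides `load_textual_inversion`, roughly 0.14 or newer; the choice of base model and prompt are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <MSG69> embedding straight from this concept repository
pipe.load_textual_inversion("sd-concepts-library/msg")

image = pipe("a photo of <MSG69> on a wooden table").images[0]
image.save("msg_example.png")
```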
huggingtweets/_a_bat
huggingtweets
2022-10-26T23:12:22Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T23:08:36Z
--- language: en thumbnail: http://www.huggingtweets.com/_a_bat/1666825888934/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2415729722/9rhiyt5scbbzagfdxrx2_400x400.gif&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Taw - version2.bat</div> <div style="text-align: center; font-size: 14px;">@_a_bat</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Taw - version2.bat. | Data | Taw - version2.bat | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 336 | | Short tweets | 258 | | Tweets kept | 2653 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2fdjcy6g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_a_bat's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/n2exl5h2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/n2exl5h2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_a_bat') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
JoAmps/littledatasets
JoAmps
2022-10-26T22:20:47Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-26T22:05:02Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: littledatasets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # littledatasets This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 85 | 0.0053 | | No log | 2.0 | 170 | 0.0002 | | No log | 3.0 | 255 | 0.0001 | | No log | 4.0 | 340 | 0.0001 | | No log | 5.0 | 425 | 0.0001 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.12.1
PraveenKishore/MLAgents-Pyramids
PraveenKishore
2022-10-26T21:59:04Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-10-26T21:32:30Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: PraveenKishore/MLAgents-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
JoAmps/littledataset
JoAmps
2022-10-26T21:53:03Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-26T21:39:48Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: littledataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # littledataset This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 169 | 0.0001 | | No log | 2.0 | 338 | 0.0000 | | 0.0036 | 3.0 | 507 | 0.0001 | | 0.0036 | 4.0 | 676 | 0.0000 | | 0.0036 | 5.0 | 845 | 0.0000 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.12.1
huggingtweets/big___oven-schizo_freq
huggingtweets
2022-10-26T21:50:36Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T17:42:08Z
--- language: en thumbnail: http://www.huggingtweets.com/big___oven-schizo_freq/1666821031327/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1582126821025382400/PZjx83du_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">oskcar & Lukas (computer)</div> <div style="text-align: center; font-size: 14px;">@big___oven-schizo_freq</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from oskcar & Lukas (computer). | Data | oskcar | Lukas (computer) | | --- | --- | --- | | Tweets downloaded | 2642 | 3234 | | Retweets | 605 | 480 | | Short tweets | 325 | 326 | | Tweets kept | 1712 | 2428 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t7nn481m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-schizo_freq's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ljhfklh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ljhfklh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/big___oven-schizo_freq') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
doodlevelyn/xlm-roberta-large-finetuned-conll03-english
doodlevelyn
2022-10-26T21:38:20Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-26T03:45:25Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-large-finetuned-conll03-english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-conll03-english This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6008 - Precision: 0.4263 - Recall: 0.1404 - F1: 0.2112 - Accuracy: 0.9559 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0001 | 1.0 | 29460 | 0.6008 | 0.4263 | 0.1404 | 0.2112 | 0.9559 | | 0.0 | 2.0 | 58920 | 0.6008 | 0.4263 | 0.1404 | 0.2112 | 0.9559 | | 0.0001 | 3.0 | 88380 | 0.6008 | 0.4263 | 0.1404 | 0.2112 | 0.9559 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
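The card above leaves usage unstated; here is a minimal, hedged sketch for querying the checkpoint with the 🤗 Transformers token-classification pipeline. The example sentence is illustrative, and the entity labels are assumed to follow the CoNLL-03 scheme named in the model id.

```python
from transformers import pipeline

# Minimal sketch (not from the card): load the fine-tuned checkpoint for NER.
ner = pipeline(
    "token-classification",
    model="doodlevelyn/xlm-roberta-large-finetuned-conll03-english",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```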
huggingtweets/nearcyan
huggingtweets
2022-10-26T21:10:01Z
8
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T21:08:44Z
--- language: en thumbnail: http://www.huggingtweets.com/nearcyan/1666818597137/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1446575702439043077/kNKnkoyI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">nearcyan</div> <div style="text-align: center; font-size: 14px;">@nearcyan</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from nearcyan. | Data | nearcyan | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 132 | | Short tweets | 136 | | Tweets kept | 2978 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ilun9vdk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nearcyan's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16w8mubo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16w8mubo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nearcyan') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
andrewzhang505/doom_test
andrewzhang505
2022-10-26T20:56:17Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "region:us" ]
reinforcement-learning
2022-10-26T20:54:41Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory --- An **APPO** model trained on the **doom_deathmatch_bots** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
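As a hedged addition, the snippet below only fetches the checkpoint files from the Hub with `huggingface_hub`; actually running or evaluating the APPO agent is done with Sample Factory's own scripts, documented in the linked repository.

```python
from huggingface_hub import snapshot_download

# Sketch: download the trained checkpoint locally; evaluation itself goes through
# Sample Factory 2.0 (see https://github.com/alex-petrenko/sample-factory).
local_dir = snapshot_download(repo_id="andrewzhang505/doom_test")
print("Checkpoint files downloaded to:", local_dir)
```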
huggingtweets/big___oven-naamitee
huggingtweets
2022-10-26T20:15:40Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T20:12:26Z
--- language: en thumbnail: http://www.huggingtweets.com/big___oven-naamitee/1666815335749/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1548322756059545605/ndrcvhSk_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">bymyamym & oskcar</div> <div style="text-align: center; font-size: 14px;">@big___oven-naamitee</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from bymyamym & oskcar. | Data | bymyamym | oskcar | | --- | --- | --- | | Tweets downloaded | 168 | 2628 | | Retweets | 45 | 605 | | Short tweets | 41 | 325 | | Tweets kept | 82 | 1698 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/drhgr3vu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-naamitee's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vrwpswox) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vrwpswox/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/big___oven-naamitee') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sd-concepts-library/cute-game-style
sd-concepts-library
2022-10-26T19:06:50Z
0
23
null
[ "license:mit", "region:us" ]
null
2022-10-26T18:31:32Z
--- license: mit --- ### Cute Game Style on Stable Diffusion This is the `<cute-game-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<cute-game-style> 0](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/1.jpeg) ![<cute-game-style> 1](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/6.jpeg) ![<cute-game-style> 2](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/2.jpeg) ![<cute-game-style> 3](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/4.jpeg) ![<cute-game-style> 4](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/0.jpeg) ![<cute-game-style> 5](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/3.jpeg) ![<cute-game-style> 6](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/7.jpeg) ![<cute-game-style> 7](https://huggingface.co/sd-concepts-library/cute-game-style/resolve/main/concept_images/5.jpeg) Here are images generated with this style: ![painting of a house in the style of <cute-game-style>](https://i.imgur.com/msUaazE.png) ![a beautiful pond in the style of <cute-game-style>](https://i.imgur.com/MVfHS33.png) ![painting of the colourful and lush interior of a greenhouse in the style of <cute-game-style>](https://i.imgur.com/WZJfoo9.png) ![cute isometric office building in the style of <cute-game-style>](https://i.imgur.com/1B1NRKh.png)
kevinbror/whynotwork
kevinbror
2022-10-26T19:02:37Z
3
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-10-26T19:02:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: whynotwork results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whynotwork This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2892 - Train End Logits Accuracy: 0.6617 - Train Start Logits Accuracy: 0.6190 - Validation Loss: 1.0393 - Validation End Logits Accuracy: 0.7213 - Validation Start Logits Accuracy: 0.6877 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7377, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.2892 | 0.6617 | 0.6190 | 1.0393 | 0.7213 | 0.6877 | 0 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
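Since the card gives no usage snippet, here is a minimal sketch assuming the repository's TensorFlow weights are loaded directly; the question and context strings are invented for illustration.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("kevinbror/whynotwork")
model = TFAutoModelForQuestionAnswering.from_pretrained("kevinbror/whynotwork")

question = "What colour is the sky?"         # illustrative input, not from the card
context = "On a clear day the sky is blue."  # illustrative input, not from the card

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Pick the most likely start/end token and decode the span between them.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```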
huggingtweets/snobrights
huggingtweets
2022-10-26T18:18:39Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T18:17:24Z
--- language: en thumbnail: http://www.huggingtweets.com/snobrights/1666808315124/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1562231899925397504/PZnUZWaV_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">vote4ana</div> <div style="text-align: center; font-size: 14px;">@snobrights</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from vote4ana. | Data | vote4ana | | --- | --- | | Tweets downloaded | 1947 | | Retweets | 510 | | Short tweets | 353 | | Tweets kept | 1084 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/163lcflh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @snobrights's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6bnd5aob) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6bnd5aob/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/snobrights') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
PraveenKishore/dqn-SpaceInvadersNoFrameskip-v4
PraveenKishore
2022-10-26T18:07:45Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-26T18:07:09Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 626.50 +/- 127.69 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PraveenKishore -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PraveenKishore -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PraveenKishore ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
huggingtweets/gretathotburg-snobrights
huggingtweets
2022-10-26T17:59:14Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T16:41:44Z
--- language: en thumbnail: http://www.huggingtweets.com/gretathotburg-snobrights/1666807149420/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1551255816992350210/yjE--1UN_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1562231899925397504/PZnUZWaV_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">cathy & vote4ana</div> <div style="text-align: center; font-size: 14px;">@gretathotburg-snobrights</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from cathy & vote4ana. | Data | cathy | vote4ana | | --- | --- | --- | | Tweets downloaded | 1107 | 1948 | | Retweets | 254 | 511 | | Short tweets | 362 | 353 | | Tweets kept | 491 | 1084 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2129jbxh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gretathotburg-snobrights's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dq4zw12) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dq4zw12/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gretathotburg-snobrights') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
asparius/big-balanced-combined-bert
asparius
2022-10-26T17:56:54Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T19:41:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: big-balanced-combined-bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # big-balanced-combined-bert This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2872 - Accuracy: 0.9055 - F1: 0.9061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
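A minimal usage sketch for this classifier, assuming nothing beyond the repository id; since the base model is Turkish BERT, the illustrative input is Turkish, and the returned label names are whatever the fine-tuned config defines (the card does not list them).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="asparius/big-balanced-combined-bert")
# Illustrative Turkish input; the label set comes from the model config, not the card.
print(classifier("Bu film gerçekten çok güzeldi."))
```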
NeelNanda/SoLU_2L_v10_old
NeelNanda
2022-10-26T17:13:59Z
71
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
2022-10-12T08:57:58Z
A 2-layer SoLU model of width 736, trained on 15B tokens of the Pile. Known bugs: the layernorm just before the unembed is an RMS norm, and because the width is not a multiple of 64, the heads were set to d_head=64 and n_heads=11, so n_heads * d_head != d_model :(
rjac/setfit-ST-ICD10-L3
rjac
2022-10-26T16:28:30Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-26T16:28:17Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1349 with parameters: ``` {'batch_size': 450, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1349, "warmup_steps": 135, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Jsjjdnwjskxij6/Ffg
Jsjjdnwjskxij6
2022-10-26T15:24:13Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-10-26T15:24:13Z
--- license: bigscience-bloom-rail-1.0 ---
pig4431/rtm_ALBERT_5E
pig4431
2022-10-26T15:04:14Z
5
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "dataset:rotten_tomatoes", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-26T15:03:22Z
--- tags: - generated_from_trainer datasets: - rotten_tomatoes model-index: - name: model_output_dir results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output_dir This model was trained from scratch on the rotten_tomatoes dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
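A short, hedged usage sketch: the model is a rotten_tomatoes fine-tune, so movie-review sentences are a natural input, but the label names returned depend on the config and are not documented in the card.

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="pig4431/rtm_ALBERT_5E")
reviews = [
    "A warm, funny and quietly moving little film.",  # illustrative positive review
    "A tedious, by-the-numbers thriller.",            # illustrative negative review
]
print(sentiment(reviews))
```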
YumaSaito/distilbert-base-uncased-finetuned-emotion
YumaSaito
2022-10-26T15:03:55Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-23T14:15:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9261092845869646 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2181 - Accuracy: 0.926 - F1: 0.9261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8618 | 1.0 | 250 | 0.3206 | 0.903 | 0.8990 | | 0.2549 | 2.0 | 500 | 0.2181 | 0.926 | 0.9261 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
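A hedged usage sketch for the emotion classifier; `top_k=None` asks the pipeline for the score of every label, assuming the six-label scheme of the `emotion` dataset was kept in the config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YumaSaito/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all emotion labels, not just the top one
)
print(classifier("I can't believe how happy this makes me!"))
```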
yyyyifan/mlkiadapter
yyyyifan
2022-10-26T14:47:27Z
0
0
null
[ "arxiv:2210.13617", "region:us" ]
null
2022-10-26T14:45:18Z
--- license: mit --- Pretrained adapters for multilingual knowledge graph enhancement (https://arxiv.org/abs/2210.13617).
gstqtfr/ddpm-butterflies-128
gstqtfr
2022-10-26T13:57:03Z
1
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-25T17:02:11Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/gstqtfr/ddpm-butterflies-128/tensorboard?#scalars)
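The card's TODO asks for an inference snippet; a minimal sketch with the 🤗 Diffusers `DDPMPipeline` (unconditional sampling) would look like this, assuming the repository id shown above.

```python
from diffusers import DDPMPipeline

# Sketch: unconditional 128x128 butterfly generation from the trained pipeline.
pipeline = DDPMPipeline.from_pretrained("gstqtfr/ddpm-butterflies-128")
image = pipeline(batch_size=1).images[0]  # a PIL.Image
image.save("butterfly.png")
```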
pig4431/rtm_BERT_5E
pig4431
2022-10-26T13:44:44Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:rotten_tomatoes", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-26T13:38:36Z
--- tags: - generated_from_trainer datasets: - rotten_tomatoes model-index: - name: rtm_bert_5E results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rtm_bert_5E This model was trained from scratch on the rotten_tomatoes dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
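A hedged sketch working directly with the raw logits rather than the pipeline helper; the meaning of each label id is defined by the fine-tuned config rather than the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("pig4431/rtm_BERT_5E")
model = AutoModelForSequenceClassification.from_pretrained("pig4431/rtm_BERT_5E")

inputs = tokenizer("A tedious, by-the-numbers thriller.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Map probabilities back to the config's label names.
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs[0])})
```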
mrm8488/codebert-base-finetuned-code-ner-15e
mrm8488
2022-10-26T13:42:00Z
24
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-26T11:57:15Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: codebert-base-finetuned-code-ner-15e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codebert-base-finetuned-code-ner-15e This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3831 - Precision: 0.6363 - Recall: 0.6494 - F1: 0.6428 - Accuracy: 0.9197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 191 | 0.4566 | 0.5021 | 0.4220 | 0.4585 | 0.8827 | | No log | 2.0 | 382 | 0.3756 | 0.5699 | 0.5764 | 0.5731 | 0.9043 | | 0.5133 | 3.0 | 573 | 0.3605 | 0.6001 | 0.5767 | 0.5882 | 0.9093 | | 0.5133 | 4.0 | 764 | 0.3500 | 0.6130 | 0.6130 | 0.6130 | 0.9153 | | 0.5133 | 5.0 | 955 | 0.3501 | 0.6337 | 0.6172 | 0.6254 | 0.9178 | | 0.2203 | 6.0 | 1146 | 0.3645 | 0.6250 | 0.6352 | 0.6300 | 0.9163 | | 0.2203 | 7.0 | 1337 | 0.3488 | 0.6263 | 0.6422 | 0.6341 | 0.9189 | | 0.1457 | 8.0 | 1528 | 0.3575 | 0.6372 | 0.6397 | 0.6384 | 0.9194 | | 0.1457 | 9.0 | 1719 | 0.3662 | 0.6406 | 0.6343 | 0.6375 | 0.9189 | | 0.1457 | 10.0 | 1910 | 0.3613 | 0.6374 | 0.6473 | 0.6423 | 0.9201 | | 0.107 | 11.0 | 2101 | 0.3716 | 0.6329 | 0.6544 | 0.6435 | 0.9197 | | 0.107 | 12.0 | 2292 | 0.3754 | 0.6328 | 0.6487 | 0.6406 | 0.9193 | | 0.107 | 13.0 | 2483 | 0.3826 | 0.6395 | 0.6490 | 0.6443 | 0.9204 | | 0.0863 | 14.0 | 2674 | 0.3821 | 0.6368 | 0.6535 | 0.6451 | 0.9200 | | 0.0863 | 15.0 | 2865 | 0.3831 | 0.6363 | 0.6494 | 0.6428 | 0.9197 | ### Evaluation results | | Algorithm | Application | Class | Code_Block | Data_Structure | Data_Type | Device | Error_Name | File_Name | File_Type | Function | HTML_XML_Tag | Keyboard_IP | Language | Library | Operating_System | Output_Block | User_Interface_Element | User_Name | Value | Variable | Version | Website | overall_precision | overall_recall | overall_f1 | overall_accuracy | |:----------|------------:|--------------:|------------:|-------------:|-----------------:|------------:|----------:|-------------:|------------:|------------:|-----------:|---------------:|--------------:|-----------:|-----------:|-------------------:|---------------:|-------------------------:|------------:|-----------:|-----------:|-----------:|----------:|--------------------:|-----------------:|-------------:|-------------------:| | precision | 0 | 0.619835 | 0.680851 | 0.455629 | 0.813187 | 0.592593 | 0.395062 | 0.181818 | 0.800505 | 0.775956 | 0.757664 | 0.585366 | 0.333333 | 0.689769 | 0.61807 | 0.769231 | 0.0212766 | 0.542214 | 0.4375 | 0.370236 | 0.560479 | 0.883721 | 0.382353 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | | recall | 0 | 0.677711 | 0.696864 | 0.494253 | 0.840909 | 0.8 | 0.533333 | 0.333333 | 
0.794486 | 0.628319 | 0.631387 | 0.470588 | 0.0169492 | 0.81323 | 0.546279 | 0.843373 | 0.04 | 0.653846 | 0.518519 | 0.52987 | 0.54482 | 0.914089 | 0.270833 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | | f1 | 0 | 0.647482 | 0.688765 | 0.474156 | 0.826816 | 0.680851 | 0.453901 | 0.235294 | 0.797484 | 0.694377 | 0.688786 | 0.521739 | 0.0322581 | 0.746429 | 0.579961 | 0.804598 | 0.0277778 | 0.592821 | 0.474576 | 0.435897 | 0.552538 | 0.898649 | 0.317073 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | | number | 31 | 664 | 1148 | 696 | 264 | 120 | 60 | 30 | 798 | 226 | 822 | 102 | 59 | 257 | 551 | 83 | 25 | 442 | 54 | 385 | 859 | 291 | 48 | 0.626308 | 0.642171 | 0.63414 | 0.918927 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
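Given the entity types in the evaluation table above (Function, Library, Variable, and so on), a hedged usage sketch over a software-related sentence; the sentence is invented for illustration.

```python
from transformers import pipeline

code_ner = pipeline(
    "token-classification",
    model="mrm8488/codebert-base-finetuned-code-ner-15e",
    aggregation_strategy="simple",  # group word pieces into whole entity spans
)
print(code_ner("Call pandas.read_csv to load the file into a DataFrame variable."))
```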
Karelito00/swin-tiny-patch4-window7-224-finetuned-eurosat
Karelito00
2022-10-26T13:40:05Z
59
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-26T13:15:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9822222222222222 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0501 - Accuracy: 0.9822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3259 | 1.0 | 379 | 0.0760 | 0.9763 | | 0.1882 | 2.0 | 758 | 0.0694 | 0.9778 | | 0.1563 | 3.0 | 1137 | 0.0501 | 0.9822 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
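A hedged usage sketch for the image classifier; the class names come from the image folder used for fine-tuning, which the card does not enumerate, and the file name below is a hypothetical local image.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Karelito00/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_tile.png"))  # hypothetical local path; a URL or PIL.Image also works
```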
lafayettecreditrepair/Credit-Repair-Services-Lafayette
lafayettecreditrepair
2022-10-26T13:08:33Z
0
0
null
[ "region:us" ]
null
2022-10-26T13:07:58Z
We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals. We’re not your average credit repair firm; we truly care, so we only charge for the items we pursue on your report. Not only does this make us one of the FASTEST credit restoration companies, but we’re also one of the most affordable. Follow this [link](https://lafayette.asapcreditrepairusa.com/)
KGsteven/distilbert-base-uncased-finetuned-cola
KGsteven
2022-10-26T12:36:42Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-19T11:25:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3038 - Matthews Correlation: 0.9198 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 1.2169 | 1.0 | 626 | 0.6782 | 0.8605 | | 0.5513 | 2.0 | 1252 | 0.4085 | 0.8998 | | 0.343 | 3.0 | 1878 | 0.3346 | 0.9122 | | 0.1642 | 4.0 | 2504 | 0.3106 | 0.9165 | | 0.1216 | 5.0 | 3130 | 0.3038 | 0.9198 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.1
huggingtweets/femoidfurry
huggingtweets
2022-10-26T11:56:36Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/femoidfurry/1666785376927/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1569453578493763590/MerXNdrF_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">shitbrain dyke upside down era</div> <div style="text-align: center; font-size: 14px;">@femoidfurry</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from shitbrain dyke upside down era. | Data | shitbrain dyke upside down era | | --- | --- | | Tweets downloaded | 3211 | | Retweets | 1977 | | Short tweets | 106 | | Tweets kept | 1128 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34ui7fp9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @femoidfurry's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/177yzikv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/177yzikv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/femoidfurry') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
chintagunta85/biobert-base-cased-v1.2-bc2gm-ner
chintagunta85
2022-10-26T11:38:53Z
30
3
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:bc2gm_corpus", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-26T10:46:44Z
--- tags: - generated_from_trainer datasets: - bc2gm_corpus metrics: - precision - recall - f1 - accuracy model-index: - name: biobert-base-cased-v1.2-bc2gm-ner results: - task: name: Token Classification type: token-classification dataset: name: bc2gm_corpus type: bc2gm_corpus config: bc2gm_corpus split: train args: bc2gm_corpus metrics: - name: Precision type: precision value: 0.7988356059445381 - name: Recall type: recall value: 0.8243478260869566 - name: F1 type: f1 value: 0.8113912231559292 - name: Accuracy type: accuracy value: 0.9772069842818806 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.2-bc2gm-ner This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the bc2gm_corpus dataset. It achieves the following results on the evaluation set: - Loss: 0.1528 - Precision: 0.7988 - Recall: 0.8243 - F1: 0.8114 - Accuracy: 0.9772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.057 | 1.0 | 782 | 0.0670 | 0.7446 | 0.8051 | 0.7736 | 0.9738 | | 0.0586 | 2.0 | 1564 | 0.0689 | 0.7689 | 0.8106 | 0.7892 | 0.9755 | | 0.0123 | 3.0 | 2346 | 0.0715 | 0.7846 | 0.8076 | 0.7959 | 0.9750 | | 0.0002 | 4.0 | 3128 | 0.0896 | 0.7942 | 0.8199 | 0.8068 | 0.9767 | | 0.0004 | 5.0 | 3910 | 0.1119 | 0.7971 | 0.8201 | 0.8084 | 0.9765 | | 0.0004 | 6.0 | 4692 | 0.1192 | 0.7966 | 0.8337 | 0.8147 | 0.9768 | | 0.013 | 7.0 | 5474 | 0.1274 | 0.7932 | 0.8266 | 0.8095 | 0.9773 | | 0.0236 | 8.0 | 6256 | 0.1419 | 0.7976 | 0.8213 | 0.8093 | 0.9771 | | 0.0004 | 9.0 | 7038 | 0.1519 | 0.8004 | 0.8261 | 0.8130 | 0.9772 | | 0.0 | 10.0 | 7820 | 0.1528 | 0.7988 | 0.8243 | 0.8114 | 0.9772 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
israel/byt5_en_am
israel
2022-10-26T10:10:40Z
14
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "am", "dataset:sample", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-25T09:31:06Z
--- language: - am datasets: - sample license: cc-by-4.0 ---
NchuNLP/Legal-Document-Question-Answering
NchuNLP
2022-10-26T09:45:48Z
178
5
transformers
[ "transformers", "pytorch", "bert", "question-answering", "zh", "dataset:LegalDocumentDataset", "endpoints_compatible", "region:us" ]
question-answering
2022-10-17T08:21:34Z
--- language: zh datasets: - LegalDocumentDataset --- # bert-base-chinese for QA This is the [bert-base-chinese](https://huggingface.co/bert-base-chinese) model, fine-tuned using the Legal Document Dataset. It's been trained on question-answer pairs for the task of Question Answering. ## Usage ### In Transformers ```python from transformers import BertTokenizerFast, BertForQuestionAnswering, pipeline model_name = "NchuNLP/Legal-Document-Question-Answering" tokenizer = BertTokenizerFast.from_pretrained(model_name) model = BertForQuestionAnswering.from_pretrained(model_name) # a) Get predictions nlp = pipeline('question-answering', model=model, tokenizer=tokenizer) QA_input = { 'question': '被告人偽造了什麼文書?', 'context': '犯罪事實一、韓金虎在采豐開發有限公司(址設臺北市○○區○○路0段000巷00○0號,下稱采豐公司)擔任臨時派遣員工,詎其竟意圖為自己不法之所有,基於行使偽造私文書、詐欺取財等犯意,於民國110年9月2日下午5時20分前某時許,在不詳地點,在采豐公司所使用之空白工作確認單中主任簽名欄上偽簽謝宏奇之簽名,佯裝其有於110年9月1日到班工作,並經工地主任確認之意,提出與采豐公司主任曾子昕而行使之,曾子昕因見該份工作確認單上有謝奇宏之簽名,因陷於錯誤而信韓金虎確實有於110年9月1日到班工作,准發薪資新臺幣(下同)2,000元給韓金虎,足生損害於采豐公司。嗣曾子昕於110年9月3日上午11時20分許,發現工作確認單點交數量有異,遂報警處理,始悉上情。二、案經曾子昕訴由臺北市政府警察局萬華分局報告偵辦。' } res = nlp(QA_input) ``` ## Authors **Kei Yu Heish:** iove22@hotmail.com **Yao-Chung Fan:** yfan@nchu.edu.tw ## About us [中興大學自然語言處理實驗室](https://nlpnchu.org/)研究方向圍繞於深度學習技術在文字資料探勘 (Text Mining) 與自然語言處理 (Natural Language Processing) 方面之研究,目前實驗室成員的研究主題著重於機器閱讀理解 (Machine Reading Comprehension) 以及自然語言生成 (Natural Language Generation) 兩面向。 ## More Information <p>For more info about Nchu NLP Lab, visit our <strong><a href="https://demo.nlpnchu.org/">Lab Online Demo</a></strong> repo and <strong><a href="https://github.com/NCHU-NLP-Lab">GitHub</a></strong>.
biu-nlp/lingmess-coref
biu-nlp
2022-10-26T08:55:32Z
3,558
10
transformers
[ "transformers", "pytorch", "longformer", "coreference-resolution", "en", "dataset:ontonotes", "arxiv:2205.12644", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-06-09T19:05:32Z
--- language: - en tags: - coreference-resolution license: mit datasets: - ontonotes metrics: - CoNLL task_categories: - coreference-resolution model-index: - name: biu-nlp/lingmess-coref results: - task: type: coreference-resolution name: coreference-resolution dataset: name: ontonotes type: coreference metrics: - name: Avg. F1 type: CoNLL value: 81.4 --- ## LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution [LingMess](https://arxiv.org/abs/2205.12644) is a linguistically motivated categorization of mention-pairs into 6 types of coreference decisions and learn a dedicated trainable scoring function for each category. This significantly improves the accuracy of the pairwise scorer as well as of the overall coreference performance on the English Ontonotes coreference corpus. Please check the [official repository](https://github.com/shon-otmazgin/lingmess-coref) for more details and updates. #### Training on OntoNotes We present the test results on OntoNotes 5.0 dataset. | Model | Avg. F1 | |---------------------------------|---------| | SpanBERT-large + e2e | 79.6 | | Longformer-large + s2e | 80.3 | | **Longformer-large + LingMess** | 81.4 | ### Citation If you find LingMess useful for your work, please cite the following paper: ``` latex @misc{https://doi.org/10.48550/arxiv.2205.12644, doi = {10.48550/ARXIV.2205.12644}, url = {https://arxiv.org/abs/2205.12644}, author = {Otmazgin, Shon and Cattan, Arie and Goldberg, Yoav}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
philadelphiacredit/Credit-Repair-Philadelphia
philadelphiacredit
2022-10-26T08:34:03Z
0
0
null
[ "region:us" ]
null
2022-10-26T08:32:38Z
We’re not your average credit repair firm; we truly care, so we only charge for the items we pursue on your report. Not only does this make us one of the FASTEST credit restoration companies, but we’re also one of the most affordable. We offer FREE consultations, evaluations, and credit education. Our process only takes 30-60 days and we offer a 100% MONEY-BACK GUARANTEE on almost all our services. Follow this [link](https://philadelphia.asapcreditrepairusa.com/)
TaoH/st-norms2
TaoH
2022-10-26T08:22:56Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-26T08:13:56Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# TaoH/st-norms2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('TaoH/st-norms2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TaoH/st-norms2')
model = AutoModel.from_pretrained('TaoH/st-norms2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=TaoH/st-norms2)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 765 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 765,
    "warmup_steps": 77,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
GV05/distilbert-base-uncased-finetuned-emotion
GV05
2022-10-26T07:56:46Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-26T07:18:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9244695413548749 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2144 - Accuracy: 0.9245 - F1: 0.9245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8227 | 1.0 | 250 | 0.3150 | 0.902 | 0.8992 | | 0.246 | 2.0 | 500 | 0.2144 | 0.9245 | 0.9245 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
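The card omits an inference example. A minimal sketch, assuming the checkpoint loads with the stock `text-classification` pipeline; the returned label names depend on the training configuration and may be generic `LABEL_<n>` identifiers rather than emotion names.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="GV05/distilbert-base-uncased-finetuned-emotion",
)

# Returns [{'label': ..., 'score': ...}] for the top-scoring class.
print(classifier("I can't believe the experiment finally worked!"))
```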
sd-concepts-library/alicebeta
sd-concepts-library
2022-10-26T07:44:34Z
0
4
null
[ "license:mit", "region:us" ]
null
2022-10-26T07:44:30Z
--- license: mit --- ### AliceBeta on Stable Diffusion This is the `<Alice-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Alice-style> 0](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/0.jpeg) ![<Alice-style> 1](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/1.jpeg) ![<Alice-style> 2](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/2.jpeg) ![<Alice-style> 3](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/3.jpeg) ![<Alice-style> 4](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/4.jpeg)
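For use outside the linked notebooks, here is a minimal `diffusers` sketch. The base checkpoint (`runwayml/stable-diffusion-v1-5`) and the availability of `load_textual_inversion` in your installed `diffusers` version are assumptions, not part of this repository.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed Stable Diffusion v1.x base model
    torch_dtype=torch.float16,
).to("cuda")

# Pull the learned <Alice-style> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/alicebeta")

image = pipe("a seaside village painted in the style of <Alice-style>").images[0]
image.save("alicebeta-sample.png")
```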
doodlevelyn/bert-base-cased
doodlevelyn
2022-10-26T07:29:26Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-26T02:32:02Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4145 - Precision: 0.4029 - Recall: 0.2740 - F1: 0.3262 - Accuracy: 0.9602 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0002 | 1.0 | 7365 | 0.3903 | 0.4151 | 0.2241 | 0.2911 | 0.9574 | | 0.0003 | 2.0 | 14730 | 0.4288 | 0.3681 | 0.2006 | 0.2597 | 0.9580 | | 0.0 | 3.0 | 22095 | 0.4145 | 0.4029 | 0.2740 | 0.3262 | 0.9602 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
LejlaKantar/ORGO
LejlaKantar
2022-10-26T07:21:13Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-10-26T07:21:13Z
--- license: bigscience-bloom-rail-1.0 ---
sania-nawaz/finetuning-sentiment-model-3000-samples
sania-nawaz
2022-10-26T06:15:45Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-26T06:04:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8666666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3286 - Accuracy: 0.8667 - F1: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
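A minimal inference sketch, assuming the checkpoint works with the stock `text-classification` pipeline; the example review is illustrative only, and the label names depend on the training configuration.

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="sania-nawaz/finetuning-sentiment-model-3000-samples",
)

# Returns [{'label': ..., 'score': ...}] for the top-scoring class.
print(sentiment("The pacing dragged in the middle, but the ending made up for it."))
```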
debbiesoon/bart_large_summarise_v2
debbiesoon
2022-10-26T05:22:32Z
8
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:multi_news", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-22T16:30:50Z
--- license: mit tags: - generated_from_trainer datasets: - multi_news metrics: - rouge model-index: - name: bart_large_summarise_v2 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: multi_news type: multi_news config: default split: train args: default metrics: - name: Rouge1 type: rouge value: 39.305 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart_large_summarise_v2 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset. It achieves the following results on the evaluation set: - Loss: 4.2988 - Rouge1: 39.305 - Rouge2: 13.4171 - Rougel: 20.4214 - Rougelsum: 34.971 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - label_smoothing_factor: 0.1 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.2.dev0 - Tokenizers 0.13.1
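A minimal inference sketch, assuming the checkpoint is usable with the stock `summarization` pipeline; the article text is a placeholder, and `max_length=142` simply mirrors the generation length reported above.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="debbiesoon/bart_large_summarise_v2")

article = (
    "Officials confirmed on Tuesday that the new rail link will open next spring, "
    "two years behind schedule, after repeated delays in signalling work and a "
    "funding dispute between the city and the national government."
)

# Returns [{'summary_text': ...}].
print(summarizer(article, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```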
huggingtweets/kubiekit
huggingtweets
2022-10-26T05:03:03Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T04:57:38Z
--- language: en thumbnail: http://www.huggingtweets.com/kubiekit/1666760547210/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1581568862616662016/XxeL1VBT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">kubie</div> <div style="text-align: center; font-size: 14px;">@kubiekit</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from kubie. | Data | kubie | | --- | --- | | Tweets downloaded | 3136 | | Retweets | 180 | | Short tweets | 611 | | Tweets kept | 2345 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mv38hcu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kubiekit's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uk7te5z) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uk7te5z/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/kubiekit') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
JamesH/Movie_review_sentiment_analysis_model
JamesH
2022-10-26T01:02:13Z
9
1
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:JamesH/autotrain-data-third-project", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-26T00:58:53Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - JamesH/autotrain-data-third-project co2_eq_emissions: emissions: 6.9919208994196795 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1883864250 - CO2 Emissions (in grams): 6.9919 ## Validation Metrics - Loss: 0.175 - Accuracy: 0.950 - Precision: 0.950 - Recall: 0.950 - AUC: 0.986 - F1: 0.950 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/JamesH/autotrain-third-project-1883864250 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("JamesH/autotrain-third-project-1883864250", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("JamesH/autotrain-third-project-1883864250", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
huggingtweets/tommyinnit
huggingtweets
2022-10-26T00:11:06Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/tommyinnit/1666743061515/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1535706274049957888/4PfG6S0y_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">TommyInnit</div> <div style="text-align: center; font-size: 14px;">@tommyinnit</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from TommyInnit. | Data | TommyInnit | | --- | --- | | Tweets downloaded | 3213 | | Retweets | 2 | | Short tweets | 464 | | Tweets kept | 2747 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/376p2x9n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tommyinnit's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2w3jxzqd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2w3jxzqd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tommyinnit') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
yohein/distilbert-base-uncased-finetuned-squad
yohein
2022-10-25T23:42:58Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-10-25T22:51:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2264 | 1.0 | 5533 | 1.1663 | | 0.9606 | 2.0 | 11066 | 1.1288 | | 0.7432 | 3.0 | 16599 | 1.1683 | ### Framework versions - Transformers 4.23.0 - Pytorch 1.12.1 - Datasets 2.5.2 - Tokenizers 0.13.1
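A minimal inference sketch, assuming the checkpoint loads with the stock `question-answering` pipeline; the question and context below are illustrative only.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="yohein/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The checkpoint is a distilbert-base-uncased model fine-tuned on the SQuAD dataset.",
)

# The pipeline returns the answer span, its score, and character offsets.
print(result["answer"], round(result["score"], 3))
```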
redevaaa/test4
redevaaa
2022-10-25T23:20:58Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:ner", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-25T22:53:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - ner metrics: - precision - recall - f1 - accuracy model-index: - name: test4 results: - task: name: Token Classification type: token-classification dataset: name: ner type: ner config: default split: train args: default metrics: - name: Precision type: precision value: 0.594855305466238 - name: Recall type: recall value: 0.6423611111111112 - name: F1 type: f1 value: 0.6176961602671119 - name: Accuracy type: accuracy value: 0.9579571605593911 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test4 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the ner dataset. It achieves the following results on the evaluation set: - Loss: 0.3100 - Precision: 0.5949 - Recall: 0.6424 - F1: 0.6177 - Accuracy: 0.9580 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 418 | 0.2052 | 0.2415 | 0.2465 | 0.2440 | 0.9423 | | 0.3341 | 2.0 | 836 | 0.1816 | 0.4286 | 0.4792 | 0.4525 | 0.9513 | | 0.1296 | 3.0 | 1254 | 0.2039 | 0.4589 | 0.5035 | 0.4801 | 0.9526 | | 0.0727 | 4.0 | 1672 | 0.2130 | 0.5237 | 0.5764 | 0.5488 | 0.9566 | | 0.0553 | 5.0 | 2090 | 0.2290 | 0.5171 | 0.5764 | 0.5452 | 0.9551 | | 0.0412 | 6.0 | 2508 | 0.2351 | 0.5390 | 0.5521 | 0.5455 | 0.9555 | | 0.0412 | 7.0 | 2926 | 0.2431 | 0.5280 | 0.5903 | 0.5574 | 0.9542 | | 0.0321 | 8.0 | 3344 | 0.2490 | 0.5825 | 0.625 | 0.6030 | 0.9570 | | 0.0249 | 9.0 | 3762 | 0.2679 | 0.5764 | 0.5764 | 0.5764 | 0.9573 | | 0.0192 | 10.0 | 4180 | 0.2574 | 0.5506 | 0.6042 | 0.5762 | 0.9558 | | 0.0206 | 11.0 | 4598 | 0.2857 | 0.5498 | 0.5938 | 0.5710 | 0.9559 | | 0.0147 | 12.0 | 5016 | 0.2638 | 0.5548 | 0.5972 | 0.5753 | 0.9550 | | 0.0147 | 13.0 | 5434 | 0.2771 | 0.5677 | 0.5972 | 0.5821 | 0.9577 | | 0.0129 | 14.0 | 5852 | 0.3016 | 0.5761 | 0.6181 | 0.5963 | 0.9549 | | 0.0118 | 15.0 | 6270 | 0.3055 | 0.5587 | 0.6111 | 0.5837 | 0.9570 | | 0.0099 | 16.0 | 6688 | 0.2937 | 0.5682 | 0.6076 | 0.5872 | 0.9564 | | 0.0099 | 17.0 | 7106 | 0.3075 | 0.5313 | 0.6181 | 0.5714 | 0.9531 | | 0.0085 | 18.0 | 7524 | 0.3079 | 0.6026 | 0.6424 | 0.6218 | 0.9580 | | 0.0085 | 19.0 | 7942 | 0.3082 | 0.5833 | 0.6319 | 0.6067 | 0.9572 | | 0.0074 | 20.0 | 8360 | 0.3100 | 0.5949 | 0.6424 | 0.6177 | 0.9580 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
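A minimal inference sketch, assuming the checkpoint works with the stock `token-classification` pipeline; since the card does not document the label set of its `ner` dataset, the printed entity groups are whatever the training configuration used.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="redevaaa/test4",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

for entity in ner("Ada Lovelace joined the analytical engine project in London."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```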
vumichien/trillsson3-ft-keyword-spotting-14
vumichien
2022-10-25T22:36:41Z
12
0
transformers
[ "transformers", "pytorch", "trillsson_efficient", "text-classification", "audio-classification", "generated_from_trainer", "dataset:superb", "autotrain_compatible", "endpoints_compatible", "region:us" ]
audio-classification
2022-10-25T14:40:22Z
--- tags: - audio-classification - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: trillsson3-ft-keyword-spotting-14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trillsson3-ft-keyword-spotting-14 This model is a fine-tuned version of [vumichien/nonsemantic-speech-trillsson3](https://huggingface.co/vumichien/nonsemantic-speech-trillsson3) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.3015 - Accuracy: 0.9150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 64 - seed: 0 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2824 | 1.0 | 1597 | 0.7818 | 0.6892 | | 0.8003 | 2.0 | 3194 | 0.4443 | 0.8735 | | 0.7232 | 3.0 | 4791 | 0.3728 | 0.8833 | | 0.73 | 4.0 | 6388 | 0.3465 | 0.8973 | | 0.7015 | 5.0 | 7985 | 0.3211 | 0.9109 | | 0.6981 | 6.0 | 9582 | 0.3200 | 0.9081 | | 0.6807 | 7.0 | 11179 | 0.3209 | 0.9059 | | 0.6873 | 8.0 | 12776 | 0.3206 | 0.9022 | | 0.6416 | 9.0 | 14373 | 0.3124 | 0.9057 | | 0.6698 | 10.0 | 15970 | 0.3288 | 0.8950 | | 0.716 | 11.0 | 17567 | 0.3147 | 0.8998 | | 0.6514 | 12.0 | 19164 | 0.3034 | 0.9112 | | 0.6513 | 13.0 | 20761 | 0.3091 | 0.9092 | | 0.652 | 14.0 | 22358 | 0.3056 | 0.9100 | | 0.7105 | 15.0 | 23955 | 0.3015 | 0.9150 | | 0.6337 | 16.0 | 25552 | 0.3070 | 0.9091 | | 0.63 | 17.0 | 27149 | 0.3018 | 0.9135 | | 0.6672 | 18.0 | 28746 | 0.3084 | 0.9088 | | 0.6479 | 19.0 | 30343 | 0.3060 | 0.9101 | | 0.6658 | 20.0 | 31940 | 0.3072 | 0.9089 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
huggingtweets/ok_0s
huggingtweets
2022-10-25T20:20:47Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-25T20:18:48Z
--- language: en thumbnail: http://www.huggingtweets.com/ok_0s/1666729242111/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1575869051850612737/Hz2LIceC_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">⓪𝕊 is minting Youts</div> <div style="text-align: center; font-size: 14px;">@ok_0s</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ⓪𝕊 is minting Youts. | Data | ⓪𝕊 is minting Youts | | --- | --- | | Tweets downloaded | 1390 | | Retweets | 132 | | Short tweets | 287 | | Tweets kept | 971 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11ejsejg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ok_0s's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1z3prl6a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1z3prl6a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ok_0s') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)