Dataset columns (value ranges as reported by the viewer):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-13 00:37:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (categorical, 555 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (categorical, 55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-13 00:35:18 |
| card | string (length) | 11 | 1.01M |
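The rows below are individual records from this index. A table with these fields can be reproduced against the live Hub with `huggingface_hub`; a minimal sketch, not part of the dump (some fields may be `None` unless explicitly requested):

```python
from huggingface_hub import HfApi

# Sketch: list a few models and print the fields that correspond to the
# columns of this dump (id / author / downloads / likes / pipeline_tag / tags).
api = HfApi()
for m in api.list_models(limit=5, full=True):
    print(m.id, m.author, m.downloads, m.likes, m.pipeline_tag, m.tags)
```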
pig4431/Sentiment140_BERT_5E
pig4431
2022-11-07T08:46:38Z
10
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T08:39:06Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sentiment140
metrics:
- accuracy
model-index:
- name: Sentiment140_BERT_5E
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: sentiment140
      type: sentiment140
      config: sentiment140
      split: train
      args: sentiment140
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.82
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Sentiment140_BERT_5E

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the sentiment140 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7061
- Accuracy: 0.82

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6882 | 0.08 | 50 | 0.6047 | 0.7 |
| 0.6223 | 0.16 | 100 | 0.5137 | 0.8067 |
| 0.5463 | 0.24 | 150 | 0.4573 | 0.8067 |
| 0.4922 | 0.32 | 200 | 0.4790 | 0.8 |
| 0.4821 | 0.4 | 250 | 0.4207 | 0.8267 |
| 0.4985 | 0.48 | 300 | 0.4267 | 0.8067 |
| 0.4455 | 0.56 | 350 | 0.4301 | 0.8133 |
| 0.469 | 0.64 | 400 | 0.4294 | 0.82 |
| 0.4906 | 0.72 | 450 | 0.4059 | 0.8067 |
| 0.4006 | 0.8 | 500 | 0.4181 | 0.8133 |
| 0.445 | 0.88 | 550 | 0.3948 | 0.8267 |
| 0.4302 | 0.96 | 600 | 0.3976 | 0.84 |
| 0.4442 | 1.04 | 650 | 0.3887 | 0.8533 |
| 0.3424 | 1.12 | 700 | 0.4119 | 0.8267 |
| 0.3589 | 1.2 | 750 | 0.4083 | 0.8533 |
| 0.3737 | 1.28 | 800 | 0.4253 | 0.8333 |
| 0.334 | 1.36 | 850 | 0.4147 | 0.86 |
| 0.3637 | 1.44 | 900 | 0.3926 | 0.8533 |
| 0.3388 | 1.52 | 950 | 0.4084 | 0.8267 |
| 0.3375 | 1.6 | 1000 | 0.4132 | 0.8467 |
| 0.3725 | 1.68 | 1050 | 0.3965 | 0.8467 |
| 0.3649 | 1.76 | 1100 | 0.3956 | 0.8333 |
| 0.3799 | 1.84 | 1150 | 0.3923 | 0.8333 |
| 0.3695 | 1.92 | 1200 | 0.4266 | 0.84 |
| 0.3233 | 2.0 | 1250 | 0.4225 | 0.8333 |
| 0.2313 | 2.08 | 1300 | 0.4672 | 0.8333 |
| 0.231 | 2.16 | 1350 | 0.5212 | 0.8133 |
| 0.2526 | 2.24 | 1400 | 0.5392 | 0.8067 |
| 0.2721 | 2.32 | 1450 | 0.4895 | 0.82 |
| 0.2141 | 2.4 | 1500 | 0.5258 | 0.8133 |
| 0.2658 | 2.48 | 1550 | 0.5046 | 0.8267 |
| 0.2386 | 2.56 | 1600 | 0.4873 | 0.8267 |
| 0.2493 | 2.64 | 1650 | 0.4950 | 0.8333 |
| 0.2692 | 2.72 | 1700 | 0.5080 | 0.8267 |
| 0.2226 | 2.8 | 1750 | 0.5016 | 0.8467 |
| 0.2522 | 2.88 | 1800 | 0.5068 | 0.8267 |
| 0.2556 | 2.96 | 1850 | 0.4937 | 0.8267 |
| 0.2311 | 3.04 | 1900 | 0.5103 | 0.8267 |
| 0.1703 | 3.12 | 1950 | 0.5680 | 0.82 |
| 0.1744 | 3.2 | 2000 | 0.5501 | 0.82 |
| 0.1667 | 3.28 | 2050 | 0.6142 | 0.82 |
| 0.1863 | 3.36 | 2100 | 0.6355 | 0.82 |
| 0.2543 | 3.44 | 2150 | 0.6000 | 0.8133 |
| 0.1565 | 3.52 | 2200 | 0.6618 | 0.8267 |
| 0.1531 | 3.6 | 2250 | 0.6595 | 0.8133 |
| 0.1915 | 3.68 | 2300 | 0.6647 | 0.8267 |
| 0.1601 | 3.76 | 2350 | 0.6729 | 0.8267 |
| 0.176 | 3.84 | 2400 | 0.6699 | 0.82 |
| 0.1815 | 3.92 | 2450 | 0.6819 | 0.8067 |
| 0.1987 | 4.0 | 2500 | 0.6543 | 0.8333 |
| 0.1236 | 4.08 | 2550 | 0.6686 | 0.8333 |
| 0.1599 | 4.16 | 2600 | 0.6583 | 0.8267 |
| 0.1256 | 4.24 | 2650 | 0.6871 | 0.8267 |
| 0.1291 | 4.32 | 2700 | 0.6855 | 0.82 |
| 0.1198 | 4.4 | 2750 | 0.6901 | 0.82 |
| 0.1245 | 4.48 | 2800 | 0.7152 | 0.8267 |
| 0.1784 | 4.56 | 2850 | 0.7053 | 0.82 |
| 0.1705 | 4.64 | 2900 | 0.7016 | 0.82 |
| 0.1265 | 4.72 | 2950 | 0.7013 | 0.82 |
| 0.1192 | 4.8 | 3000 | 0.7084 | 0.82 |
| 0.174 | 4.88 | 3050 | 0.7062 | 0.82 |
| 0.1328 | 4.96 | 3100 | 0.7061 | 0.82 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
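The card above stops at the training log and gives no usage example; a minimal inference sketch (not part of the original card) using the standard `transformers` text-classification pipeline with this record's repo id:

```python
from transformers import pipeline

# Sketch: load pig4431/Sentiment140_BERT_5E from the Hub and score a tweet.
# The label names (e.g. LABEL_0 / LABEL_1) depend on the checkpoint's config.
classifier = pipeline("text-classification", model="pig4431/Sentiment140_BERT_5E")
print(classifier("I love this new phone!"))
```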
bofenghuang/wav2vec2-xls-r-1b-cv9-fr
bofenghuang
2022-11-07T08:37:59Z
8
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_9_0", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "fr", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_9_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-12T13:09:54Z
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
- mozilla-foundation/common_voice_9_0
model-index:
- name: Fine-tuned Wav2Vec2 XLS-R 1B model for ASR in French
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 9
      type: mozilla-foundation/common_voice_9_0
      args: fr
    metrics:
    - name: Test WER
      type: wer
      value: 12.72
    - name: Test WER (+LM)
      type: wer
      value: 10.60
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: fr
    metrics:
    - name: Test WER
      type: wer
      value: 24.28
    - name: Test WER (+LM)
      type: wer
      value: 20.85
---

# Fine-tuned Wav2Vec2 XLS-R 1B model for ASR in French

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - FR dataset.

## Usage

1. To use on a local audio file without the language model

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("bofenghuang/wav2vec2-xls-r-1b-cv9-fr")
model = AutoModelForCTC.from_pretrained("bofenghuang/wav2vec2-xls-r-1b-cv9-fr").cuda()

# path to your audio file
wav_path = "example.wav"
waveform, sample_rate = torchaudio.load(wav_path)
waveform = waveform.squeeze(axis=0)  # mono

# resample
if sample_rate != 16_000:
    resampler = torchaudio.transforms.Resample(sample_rate, 16_000)
    waveform = resampler(waveform)

# normalize
input_dict = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.inference_mode():
    logits = model(input_dict.input_values.to("cuda")).logits

# decode
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentence = processor.batch_decode(predicted_ids)[0]
```

2. To use on a local audio file with the language model

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("bofenghuang/wav2vec2-xls-r-1b-cv9-fr")
model = AutoModelForCTC.from_pretrained("bofenghuang/wav2vec2-xls-r-1b-cv9-fr").cuda()

model_sampling_rate = processor_with_lm.feature_extractor.sampling_rate

# path to your audio file
wav_path = "example.wav"
waveform, sample_rate = torchaudio.load(wav_path)
waveform = waveform.squeeze(axis=0)  # mono

# resample
if sample_rate != 16_000:
    resampler = torchaudio.transforms.Resample(sample_rate, 16_000)
    waveform = resampler(waveform)

# normalize
input_dict = processor_with_lm(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.inference_mode():
    logits = model(input_dict.input_values.to("cuda")).logits

predicted_sentence = processor_with_lm.batch_decode(logits.cpu().numpy()).text[0]
```

## Evaluation

1. To evaluate on `mozilla-foundation/common_voice_9_0`

```bash
python eval.py \
    --model_id "bofenghuang/wav2vec2-xls-r-1b-cv9-fr" \
    --dataset "mozilla-foundation/common_voice_9_0" \
    --config "fr" \
    --split "test" \
    --log_outputs \
    --outdir "outputs/results_mozilla-foundation_common_voice_9_0_with_lm"
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py \
    --model_id "bofenghuang/wav2vec2-xls-r-1b-cv9-fr" \
    --dataset "speech-recognition-community-v2/dev_data" \
    --config "fr" \
    --split "validation" \
    --chunk_length_s 5.0 \
    --stride_length_s 1.0 \
    --log_outputs \
    --outdir "outputs/results_speech-recognition-community-v2_dev_data_with_lm"
```
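For quick experiments, the generic `transformers` ASR pipeline wraps the feature extractor, model, and (when bundled in the repo) the LM-boosted decoder in one call; a minimal sketch, not taken from the original card:

```python
from transformers import pipeline

# Sketch: one-call transcription of a local audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="bofenghuang/wav2vec2-xls-r-1b-cv9-fr")
print(asr("example.wav")["text"])
```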
ahmadRa/q-Taxi-v3-try1
ahmadRa
2022-11-07T08:18:35Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T08:18:29Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-try1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.76
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions from the
# Hugging Face Deep RL course notebook that produced this model.
model = load_from_hub(repo_id="ahmadRa/q-Taxi-v3-try1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
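The snippet above assumes the course helpers are in scope. A minimal sketch of `load_from_hub`, assuming the pickle layout the Deep RL course uses (a dict with `qtable`, `env_id`, `max_steps`, `n_eval_episodes`, `eval_seed`):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table bundle from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```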
cynthiachan/finetuned-bert-base
cynthiachan
2022-11-07T07:56:55Z
12
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:cynthiachan/FeedRef2022", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-07T06:51:03Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cynthiachan/FeedRef2022
model-index:
- name: training
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# training

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the cynthiachan/FeedRef2022 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0514
- Attackid Precision: 0.8889
- Attackid Recall: 0.9231
- Attackid F1: 0.9057
- Attackid Number: 52
- Bitcoinaddr Precision: 0.875
- Bitcoinaddr Recall: 1.0
- Bitcoinaddr F1: 0.9333
- Bitcoinaddr Number: 7
- Cve Precision: 0.8378
- Cve Recall: 0.9538
- Cve F1: 0.8921
- Cve Number: 65
- Defenderthreat Precision: 0.875
- Defenderthreat Recall: 1.0
- Defenderthreat F1: 0.9333
- Defenderthreat Number: 7
- Domain Precision: 0.9279
- Domain Recall: 0.9369
- Domain F1: 0.9324
- Domain Number: 206
- Email Precision: 0.8333
- Email Recall: 0.9302
- Email F1: 0.8791
- Email Number: 43
- Filepath Precision: 0.8857
- Filepath Recall: 0.9195
- Filepath F1: 0.9023
- Filepath Number: 1652
- Fingerprint Precision: 0.0
- Fingerprint Recall: 0.0
- Fingerprint F1: 0.0
- Fingerprint Number: 2
- Hostname Precision: 0.8910
- Hostname Recall: 0.9653
- Hostname F1: 0.9267
- Hostname Number: 144
- Ipv4 Precision: 0.9767
- Ipv4 Recall: 0.9825
- Ipv4 F1: 0.9796
- Ipv4 Number: 171
- Ipv6 Precision: 0.3333
- Ipv6 Recall: 1.0
- Ipv6 F1: 0.5
- Ipv6 Number: 3
- Md5 Precision: 0.9141
- Md5 Recall: 0.9857
- Md5 F1: 0.9486
- Md5 Number: 421
- Sha1 Precision: 0.8545
- Sha1 Recall: 0.9592
- Sha1 F1: 0.9038
- Sha1 Number: 49
- Sha256 Precision: 0.9120
- Sha256 Recall: 0.9919
- Sha256 F1: 0.9502
- Sha256 Number: 491
- Uri Precision: 0.3333
- Uri Recall: 0.4545
- Uri F1: 0.3846
- Uri Number: 11
- Overall Precision: 0.8946
- Overall Recall: 0.9446
- Overall F1: 0.9189
- Overall Accuracy: 0.9886

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Attackid Precision | Attackid Recall | Attackid F1 | Attackid Number | Bitcoinaddr Precision | Bitcoinaddr Recall | Bitcoinaddr F1 | Bitcoinaddr Number | Cve Precision | Cve Recall | Cve F1 | Cve Number | Defenderthreat Precision | Defenderthreat Recall | Defenderthreat F1 | Defenderthreat Number | Domain Precision | Domain Recall | Domain F1 | Domain Number | Email Precision | Email Recall | Email F1 | Email Number | Filepath Precision | Filepath Recall | Filepath F1 | Filepath Number | Fingerprint Precision | Fingerprint Recall | Fingerprint F1 | Fingerprint Number | Hostname Precision | Hostname Recall | Hostname F1 | Hostname Number | Ipv4 Precision | Ipv4 Recall | Ipv4 F1 | Ipv4 Number | Ipv6 Precision | Ipv6 Recall | Ipv6 F1 | Ipv6 Number | Md5 Precision | Md5 Recall | Md5 F1 | Md5 Number | Sha1 Precision | Sha1 Recall | Sha1 F1 | Sha1 Number | Sha256 Precision | Sha256 Recall | Sha256 F1 | Sha256 Number | Uri Precision | Uri Recall | Uri F1 | Uri Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:-------------:|:----------:|:------:|:----------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:----------------:|:-------------:|:---------:|:-------------:|:---------------:|:------------:|:--------:|:------------:|:------------------:|:---------------:|:-----------:|:---------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:--------------:|:-----------:|:-------:|:-----------:|:----------------:|:-------------:|:---------:|:-------------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3691 | 0.04 | 500 | 0.3054 | 0.0 | 0.0 | 0.0 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.0 | 0.0 | 0.0 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.0 | 0.0 | 0.0 | 206 | 0.0 | 0.0 | 0.0 | 43 | 0.1917 | 0.5975 | 0.2903 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 144 | 0.5747 | 0.5848 | 0.5797 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.4160 | 0.7648 | 0.5389 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.5131 | 0.9145 | 0.6574 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.2665 | 0.5590 | 0.3610 | 0.9297 |
| 0.2388 | 0.07 | 1000 | 0.2124 | 0.0 | 0.0 | 0.0 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7467 | 0.8615 | 0.8 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.0 | 0.0 | 0.0 | 206 | 0.0 | 0.0 | 0.0 | 43 | 0.3846 | 0.4661 | 0.4215 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.3534 | 0.6528 | 0.4585 | 144 | 0.6667 | 0.5614 | 0.6095 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.5275 | 0.9097 | 0.6678 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.8787 | 0.9002 | 0.8893 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.4932 | 0.5539 | 0.5218 | 0.9491 |
| 0.1817 | 0.11 | 1500 | 0.2025 | 0.4433 | 0.8269 | 0.5772 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7941 | 0.8308 | 0.8120 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.2241 | 0.6602 | 0.3346 | 206 | 0.1538 | 0.2326 | 0.1852 | 43 | 0.4561 | 0.6816 | 0.5465 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.0042 | 0.0069 | 0.0052 | 144 | 0.6522 | 0.7018 | 0.6761 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.5671 | 0.8527 | 0.6812 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.7623 | 0.9470 | 0.8447 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.4654 | 0.6961 | 0.5579 | 0.9563 |
| 0.1552 | 0.15 | 2000 | 0.1581 | 0.6119 | 0.7885 | 0.6891 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8235 | 0.8615 | 0.8421 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.4979 | 0.5680 | 0.5306 | 206 | 0.4795 | 0.8140 | 0.6034 | 43 | 0.4876 | 0.7960 | 0.6047 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.5682 | 0.6944 | 0.625 | 144 | 0.4692 | 0.8012 | 0.5918 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.5321 | 0.9240 | 0.6753 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.7951 | 0.9328 | 0.8585 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.5345 | 0.7966 | 0.6398 | 0.9622 |
| 0.1567 | 0.19 | 2500 | 0.1619 | 0.6032 | 0.7308 | 0.6609 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8133 | 0.9385 | 0.8714 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6257 | 0.5680 | 0.5954 | 206 | 0.1379 | 0.1860 | 0.1584 | 43 | 0.5788 | 0.7512 | 0.6538 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.4981 | 0.9097 | 0.6437 | 144 | 0.7233 | 0.8713 | 0.7905 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7723 | 0.9264 | 0.8423 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.7523 | 0.9837 | 0.8526 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.6308 | 0.7876 | 0.7006 | 0.9628 |
| 0.1588 | 0.22 | 3000 | 0.1409 | 0.4050 | 0.9423 | 0.5665 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.5962 | 0.9538 | 0.7337 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6805 | 0.7961 | 0.7338 | 206 | 0.5821 | 0.9070 | 0.7091 | 43 | 0.6291 | 0.7712 | 0.6930 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6902 | 0.8819 | 0.7744 | 144 | 0.5737 | 0.8421 | 0.6825 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.5678 | 0.9454 | 0.7094 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.8582 | 0.9735 | 0.9122 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.6300 | 0.8228 | 0.7136 | 0.9664 |
| 0.1257 | 0.26 | 3500 | 0.1417 | 0.5541 | 0.7885 | 0.6508 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6854 | 0.9385 | 0.7922 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6828 | 0.7524 | 0.7159 | 206 | 0.5217 | 0.8372 | 0.6429 | 43 | 0.6314 | 0.7155 | 0.6708 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.5261 | 0.9097 | 0.6667 | 144 | 0.7562 | 0.8889 | 0.8172 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7435 | 0.9501 | 0.8342 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.7325 | 0.9817 | 0.8390 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.6627 | 0.7942 | 0.7225 | 0.9658 |
| 0.1229 | 0.3 | 4000 | 0.1455 | 0.6567 | 0.8462 | 0.7395 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7391 | 0.7846 | 0.7612 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6858 | 0.7524 | 0.7176 | 206 | 0.4321 | 0.8140 | 0.5645 | 43 | 0.6740 | 0.7809 | 0.7235 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6452 | 0.8333 | 0.7273 | 144 | 0.5455 | 0.5614 | 0.5533 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7697 | 0.8575 | 0.8112 | 421 | 0.3645 | 0.7959 | 0.5 | 49 | 0.6948 | 0.9735 | 0.8109 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.6684 | 0.8029 | 0.7295 | 0.9667 |
| 0.1323 | 0.34 | 4500 | 0.1323 | 0.6719 | 0.8269 | 0.7414 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7910 | 0.8154 | 0.8030 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6064 | 0.7330 | 0.6637 | 206 | 0.74 | 0.8605 | 0.7957 | 43 | 0.6802 | 0.7391 | 0.7084 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.5935 | 0.5069 | 0.5468 | 144 | 0.7826 | 0.7368 | 0.7590 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7783 | 0.8171 | 0.7972 | 421 | 0.3810 | 0.8163 | 0.5195 | 49 | 0.8368 | 0.9715 | 0.8992 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7049 | 0.7717 | 0.7368 | 0.9680 |
| 0.1379 | 0.37 | 5000 | 0.1088 | 0.5930 | 0.9808 | 0.7391 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.725 | 0.8923 | 0.8 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7619 | 0.6990 | 0.7291 | 206 | 0.5556 | 0.9302 | 0.6957 | 43 | 0.6551 | 0.8360 | 0.7346 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7127 | 0.8958 | 0.7938 | 144 | 0.7989 | 0.8596 | 0.8282 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7665 | 0.9359 | 0.8428 | 421 | 0.3729 | 0.4490 | 0.4074 | 49 | 0.7278 | 0.9695 | 0.8314 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.6886 | 0.8550 | 0.7629 | 0.9738 |
| 0.1162 | 0.41 | 5500 | 0.1205 | 0.5765 | 0.9423 | 0.7153 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8026 | 0.9385 | 0.8652 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7960 | 0.7767 | 0.7862 | 206 | 0.6032 | 0.8837 | 0.7170 | 43 | 0.6724 | 0.8099 | 0.7348 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6791 | 0.8819 | 0.7674 | 144 | 0.8041 | 0.9123 | 0.8548 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7188 | 0.9287 | 0.8104 | 421 | 0.5714 | 0.8163 | 0.6723 | 49 | 0.8088 | 0.9735 | 0.8835 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7033 | 0.8538 | 0.7713 | 0.9711 |
| 0.1128 | 0.45 | 6000 | 0.1165 | 0.6575 | 0.9231 | 0.768 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7143 | 0.9231 | 0.8054 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7703 | 0.7816 | 0.7759 | 206 | 0.6724 | 0.9070 | 0.7723 | 43 | 0.6634 | 0.7706 | 0.7130 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6580 | 0.8819 | 0.7537 | 144 | 0.8434 | 0.8187 | 0.8309 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8032 | 0.9596 | 0.8745 | 421 | 0.6066 | 0.7551 | 0.6727 | 49 | 0.8554 | 0.9756 | 0.9115 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7201 | 0.8327 | 0.7723 | 0.9736 |
| 0.11 | 0.49 | 6500 | 0.1374 | 0.7167 | 0.8269 | 0.7679 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7273 | 0.8615 | 0.7887 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7592 | 0.7039 | 0.7305 | 206 | 0.725 | 0.6744 | 0.6988 | 43 | 0.6129 | 0.7524 | 0.6755 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7151 | 0.8542 | 0.7785 | 144 | 0.7919 | 0.8012 | 0.7965 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7821 | 0.9549 | 0.8599 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.6880 | 0.9837 | 0.8097 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.6710 | 0.8005 | 0.7300 | 0.9680 |
| 0.1152 | 0.52 | 7000 | 0.1152 | 0.6933 | 1.0 | 0.8189 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6374 | 0.8923 | 0.7436 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6103 | 0.6311 | 0.6205 | 206 | 0.6739 | 0.7209 | 0.6966 | 43 | 0.6969 | 0.7960 | 0.7431 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7721 | 0.7292 | 0.75 | 144 | 0.8526 | 0.7778 | 0.8135 | 171 | 0.0192 | 0.3333 | 0.0364 | 3 | 0.8549 | 0.9097 | 0.8815 | 421 | 0.4706 | 0.8163 | 0.5970 | 49 | 0.8625 | 0.9837 | 0.9191 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7271 | 0.8216 | 0.7715 | 0.9722 |
| 0.1084 | 0.56 | 7500 | 0.1073 | 0.75 | 0.8077 | 0.7778 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6484 | 0.9077 | 0.7564 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7313 | 0.8058 | 0.7667 | 206 | 0.6452 | 0.9302 | 0.7619 | 43 | 0.6933 | 0.8196 | 0.7512 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6818 | 0.9375 | 0.7895 | 144 | 0.6872 | 0.9123 | 0.7839 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8789 | 0.9477 | 0.9120 | 421 | 0.7451 | 0.7755 | 0.76 | 49 | 0.8374 | 0.9857 | 0.9055 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7277 | 0.8643 | 0.7902 | 0.9741 |
| 0.0789 | 0.6 | 8000 | 0.0958 | 0.7719 | 0.8462 | 0.8073 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7403 | 0.8769 | 0.8028 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7731 | 0.8107 | 0.7915 | 206 | 0.74 | 0.8605 | 0.7957 | 43 | 0.7408 | 0.7924 | 0.7657 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6749 | 0.9514 | 0.7896 | 144 | 0.8011 | 0.8480 | 0.8239 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8299 | 0.9620 | 0.8911 | 421 | 0.5686 | 0.5918 | 0.58 | 49 | 0.8770 | 0.9878 | 0.9291 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7700 | 0.8469 | 0.8066 | 0.9760 |
| 0.1149 | 0.64 | 8500 | 0.1334 | 1.0 | 0.7692 | 0.8696 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6795 | 0.8154 | 0.7413 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7336 | 0.7621 | 0.7476 | 206 | 0.3824 | 0.6047 | 0.4685 | 43 | 0.6318 | 0.5454 | 0.5854 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8227 | 0.8056 | 0.8140 | 144 | 0.7707 | 0.7076 | 0.7378 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8776 | 0.9026 | 0.8899 | 421 | 0.6129 | 0.7755 | 0.6847 | 49 | 0.8339 | 0.9817 | 0.9018 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7231 | 0.6961 | 0.7094 | 0.9673 |
| 0.1155 | 0.67 | 9000 | 0.1052 | 0.6267 | 0.9038 | 0.7402 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7294 | 0.9538 | 0.8267 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7232 | 0.7864 | 0.7535 | 206 | 0.7391 | 0.7907 | 0.7640 | 43 | 0.7494 | 0.7312 | 0.7402 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7531 | 0.8472 | 0.7974 | 144 | 0.8708 | 0.9064 | 0.8883 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8340 | 0.9667 | 0.8955 | 421 | 0.5714 | 0.5714 | 0.5714 | 49 | 0.8709 | 0.9756 | 0.9203 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7664 | 0.8135 | 0.7893 | 0.9742 |
| 0.0926 | 0.71 | 9500 | 0.1048 | 0.6438 | 0.9038 | 0.752 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6610 | 0.6 | 0.6290 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7541 | 0.6699 | 0.7095 | 206 | 0.7308 | 0.8837 | 0.8 | 43 | 0.6768 | 0.8456 | 0.7519 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7119 | 0.875 | 0.7850 | 144 | 0.8343 | 0.8830 | 0.8580 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8712 | 0.9477 | 0.9078 | 421 | 0.7193 | 0.8367 | 0.7736 | 49 | 0.8476 | 0.9857 | 0.9115 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7322 | 0.8604 | 0.7911 | 0.9760 |
| 0.0982 | 0.75 | 10000 | 0.0985 | 0.6533 | 0.9423 | 0.7717 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7027 | 0.8 | 0.7482 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7671 | 0.8155 | 0.7906 | 206 | 0.7143 | 0.9302 | 0.8081 | 43 | 0.7465 | 0.8039 | 0.7741 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6507 | 0.9444 | 0.7705 | 144 | 0.9106 | 0.9532 | 0.9314 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8008 | 0.9264 | 0.8590 | 421 | 0.5641 | 0.8980 | 0.6929 | 49 | 0.8460 | 0.9735 | 0.9053 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7633 | 0.8568 | 0.8074 | 0.9769 |
| 0.085 | 0.79 | 10500 | 0.0972 | 0.6184 | 0.9038 | 0.7344 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8154 | 0.8154 | 0.8154 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7236 | 0.8641 | 0.7876 | 206 | 0.7755 | 0.8837 | 0.8261 | 43 | 0.7544 | 0.8105 | 0.7814 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7081 | 0.9097 | 0.7964 | 144 | 0.8778 | 0.9240 | 0.9003 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8976 | 0.9572 | 0.9264 | 421 | 0.8039 | 0.8367 | 0.8200 | 49 | 0.8432 | 0.9857 | 0.9089 | 491 | 0.1111 | 0.0909 | 0.1000 | 11 | 0.7852 | 0.8643 | 0.8229 | 0.9779 |
| 0.0981 | 0.82 | 11000 | 0.1092 | 0.6944 | 0.9615 | 0.8065 | 52 | 0.2 | 0.1429 | 0.1667 | 7 | 0.7262 | 0.9385 | 0.8188 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6842 | 0.8835 | 0.7712 | 206 | 0.6667 | 0.7907 | 0.7234 | 43 | 0.7117 | 0.8251 | 0.7642 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7159 | 0.875 | 0.7875 | 144 | 0.9337 | 0.9064 | 0.9199 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7175 | 0.9715 | 0.8254 | 421 | 0.0 | 0.0 | 0.0 | 49 | 0.8620 | 0.9796 | 0.9171 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7399 | 0.8610 | 0.7959 | 0.9737 |
| 0.0892 | 0.86 | 11500 | 0.0969 | 0.6049 | 0.9423 | 0.7368 | 52 | 0.4545 | 0.7143 | 0.5556 | 7 | 0.0 | 0.0 | 0.0 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.8 | 0.8155 | 0.8077 | 206 | 0.8696 | 0.9302 | 0.8989 | 43 | 0.6975 | 0.8571 | 0.7691 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7397 | 0.75 | 0.7448 | 144 | 0.8841 | 0.8480 | 0.8657 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8821 | 0.9596 | 0.9192 | 421 | 0.9474 | 0.7347 | 0.8276 | 49 | 0.8251 | 0.9511 | 0.8836 | 491 | 0.25 | 0.1818 | 0.2105 | 11 | 0.7557 | 0.8544 | 0.8020 | 0.9759 |
| 0.0924 | 0.9 | 12000 | 0.0971 | 0.7059 | 0.9231 | 0.8000 | 52 | 0.4615 | 0.8571 | 0.6 | 7 | 0.8108 | 0.9231 | 0.8633 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7331 | 0.8932 | 0.8053 | 206 | 0.8 | 0.9302 | 0.8602 | 43 | 0.7544 | 0.8535 | 0.8009 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7697 | 0.8819 | 0.8220 | 144 | 0.8947 | 0.8947 | 0.8947 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7758 | 0.9454 | 0.8522 | 421 | 0.4516 | 0.8571 | 0.5915 | 49 | 0.8618 | 0.9776 | 0.9160 | 491 | 0.08 | 0.1818 | 0.1111 | 11 | 0.7664 | 0.8875 | 0.8225 | 0.9782 |
| 0.0784 | 0.94 | 12500 | 0.1113 | 0.6623 | 0.9808 | 0.7907 | 52 | 0.6667 | 0.8571 | 0.75 | 7 | 0.8406 | 0.8923 | 0.8657 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.6865 | 0.8398 | 0.7555 | 206 | 0.7547 | 0.9302 | 0.8333 | 43 | 0.7858 | 0.7863 | 0.7861 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8026 | 0.8472 | 0.8243 | 144 | 0.8629 | 0.8830 | 0.8728 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8462 | 0.9406 | 0.8909 | 421 | 0.56 | 0.8571 | 0.6774 | 49 | 0.9119 | 0.9695 | 0.9398 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.8022 | 0.8466 | 0.8238 | 0.9774 |
| 0.1063 | 0.97 | 13000 | 0.0932 | 0.6538 | 0.9808 | 0.7846 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7838 | 0.8923 | 0.8345 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7738 | 0.8301 | 0.8009 | 206 | 0.75 | 0.8372 | 0.7912 | 43 | 0.6979 | 0.8529 | 0.7676 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7086 | 0.8611 | 0.7774 | 144 | 0.8703 | 0.9415 | 0.9045 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.6184 | 0.8931 | 0.7308 | 421 | 0.2424 | 0.1633 | 0.1951 | 49 | 0.8511 | 0.9776 | 0.9100 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7176 | 0.8646 | 0.7843 | 0.9760 |
| 0.0765 | 1.01 | 13500 | 0.0892 | 0.6806 | 0.9423 | 0.7903 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6714 | 0.7231 | 0.6963 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.8416 | 0.8252 | 0.8333 | 206 | 0.7917 | 0.8837 | 0.8352 | 43 | 0.7330 | 0.8559 | 0.7897 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7105 | 0.9375 | 0.8084 | 144 | 0.8757 | 0.9474 | 0.9101 | 171 | 0.125 | 1.0 | 0.2222 | 3 | 0.8769 | 0.9810 | 0.9260 | 421 | 0.5970 | 0.8163 | 0.6897 | 49 | 0.8761 | 0.9796 | 0.925 | 491 | 0.0 | 0.0 | 0.0 | 11 | 0.7696 | 0.8881 | 0.8246 | 0.9790 |
| 0.0677 | 1.05 | 14000 | 0.0804 | 0.6667 | 0.9231 | 0.7742 | 52 | 0.3333 | 0.7143 | 0.4545 | 7 | 0.7941 | 0.8308 | 0.8120 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.8112 | 0.7718 | 0.7910 | 206 | 0.7234 | 0.7907 | 0.7556 | 43 | 0.7725 | 0.8487 | 0.8088 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7949 | 0.8611 | 0.8267 | 144 | 0.9401 | 0.9181 | 0.9290 | 171 | 0.1765 | 1.0 | 0.3 | 3 | 0.8613 | 0.9739 | 0.9142 | 421 | 0.4868 | 0.7551 | 0.592 | 49 | 0.8881 | 0.9857 | 0.9344 | 491 | 0.2222 | 0.1818 | 0.2000 | 11 | 0.7978 | 0.8782 | 0.8360 | 0.9805 |
| 0.0544 | 1.09 | 14500 | 0.0924 | 0.9216 | 0.9038 | 0.9126 | 52 | 0.1875 | 0.4286 | 0.2609 | 7 | 0.7973 | 0.9077 | 0.8489 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.7511 | 0.8641 | 0.8036 | 206 | 0.78 | 0.9070 | 0.8387 | 43 | 0.7361 | 0.8747 | 0.7994 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.6569 | 0.9306 | 0.7701 | 144 | 0.9253 | 0.9415 | 0.9333 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.9146 | 0.9667 | 0.9400 | 421 | 0.6308 | 0.8367 | 0.7193 | 49 | 0.8121 | 0.9857 | 0.8905 | 491 | 0.0833 | 0.1818 | 0.1143 | 11 | 0.7679 | 0.9025 | 0.8298 | 0.9793 |
| 0.0797 | 1.12 | 15000 | 0.0851 | 0.9057 | 0.9231 | 0.9143 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7294 | 0.9538 | 0.8267 | 65 | 0.5 | 0.5714 | 0.5333 | 7 | 0.7909 | 0.8447 | 0.8169 | 206 | 0.8125 | 0.9070 | 0.8571 | 43 | 0.8104 | 0.8432 | 0.8265 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7 | 0.8264 | 0.7580 | 144 | 0.8804 | 0.9474 | 0.9127 | 171 | 0.2222 | 0.6667 | 0.3333 | 3 | 0.8834 | 0.9359 | 0.9089 | 421 | 0.5056 | 0.9184 | 0.6522 | 49 | 0.8436 | 0.9776 | 0.9057 | 491 | 0.0625 | 0.0909 | 0.0741 | 11 | 0.8077 | 0.8794 | 0.8420 | 0.9793 |
| 0.0544 | 1.16 | 15500 | 0.0905 | 0.7 | 0.9423 | 0.8033 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6421 | 0.9385 | 0.7625 | 65 | 0.25 | 0.2857 | 0.2667 | 7 | 0.8018 | 0.8447 | 0.8227 | 206 | 0.7273 | 0.9302 | 0.8163 | 43 | 0.7642 | 0.8571 | 0.8080 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8098 | 0.9167 | 0.8599 | 144 | 0.9261 | 0.9532 | 0.9395 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.6976 | 0.9810 | 0.8154 | 421 | 0.6066 | 0.7551 | 0.6727 | 49 | 0.8948 | 0.9878 | 0.9390 | 491 | 0.3636 | 0.3636 | 0.3636 | 11 | 0.7664 | 0.8953 | 0.8259 | 0.9793 |
| 0.0815 | 1.2 | 16000 | 0.0799 | 0.9804 | 0.9615 | 0.9709 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6593 | 0.9231 | 0.7692 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.825 | 0.8010 | 0.8128 | 206 | 0.6667 | 0.9302 | 0.7767 | 43 | 0.7140 | 0.8523 | 0.7770 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7016 | 0.9306 | 0.8000 | 144 | 0.9096 | 0.9415 | 0.9253 | 171 | 0.3 | 1.0 | 0.4615 | 3 | 0.7203 | 0.9359 | 0.8140 | 421 | 0.3193 | 0.7755 | 0.4524 | 49 | 0.8548 | 0.9470 | 0.8986 | 491 | 0.5 | 0.4545 | 0.4762 | 11 | 0.7339 | 0.8794 | 0.8001 | 0.9780 |
| 0.0647 | 1.24 | 16500 | 0.0739 | 0.8889 | 0.9231 | 0.9057 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7692 | 0.9231 | 0.8392 | 65 | 0.0 | 0.0 | 0.0 | 7 | 0.8077 | 0.8155 | 0.8116 | 206 | 0.8 | 0.9302 | 0.8602 | 43 | 0.7750 | 0.8717 | 0.8205 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8467 | 0.8819 | 0.8639 | 144 | 0.92 | 0.9415 | 0.9306 | 171 | 0.0682 | 1.0 | 0.1277 | 3 | 0.8515 | 0.9810 | 0.9117 | 421 | 0.9318 | 0.8367 | 0.8817 | 49 | 0.9120 | 0.9919 | 0.9502 | 491 | 0.1875 | 0.2727 | 0.2222 | 11 | 0.8066 | 0.8998 | 0.8507 | 0.9820 |
| 0.0532 | 1.27 | 17000 | 0.0870 | 0.8491 | 0.8654 | 0.8571 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8657 | 0.8923 | 0.8788 | 65 | 0.5714 | 0.5714 | 0.5714 | 7 | 0.7404 | 0.8447 | 0.7891 | 206 | 0.8163 | 0.9302 | 0.8696 | 43 | 0.8296 | 0.8547 | 0.8420 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8217 | 0.8958 | 0.8571 | 144 | 0.8931 | 0.8304 | 0.8606 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8369 | 0.9382 | 0.8847 | 421 | 0.9574 | 0.9184 | 0.9375 | 49 | 0.9026 | 0.9817 | 0.9405 | 491 | 0.5714 | 0.3636 | 0.4444 | 11 | 0.8367 | 0.8815 | 0.8585 | 0.9810 |
| 0.0673 | 1.31 | 17500 | 0.0851 | 0.8929 | 0.9615 | 0.9259 | 52 | 0.5714 | 0.5714 | 0.5714 | 7 | 0.7024 | 0.9077 | 0.7919 | 65 | 0.4 | 0.2857 | 0.3333 | 7 | 0.7817 | 0.8689 | 0.8230 | 206 | 0.7959 | 0.9070 | 0.8478 | 43 | 0.8198 | 0.8511 | 0.8352 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7738 | 0.9028 | 0.8333 | 144 | 0.9162 | 0.9591 | 0.9371 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8655 | 0.9786 | 0.9186 | 421 | 0.775 | 0.6327 | 0.6966 | 49 | 0.8377 | 0.9776 | 0.9023 | 491 | 0.2143 | 0.2727 | 0.2400 | 11 | 0.8231 | 0.8902 | 0.8553 | 0.9816 |
| 0.0715 | 1.35 | 18000 | 0.0821 | 0.8868 | 0.9038 | 0.8952 | 52 | 0.1 | 1.0 | 0.1818 | 7 | 0.6778 | 0.9385 | 0.7871 | 65 | 0.8 | 0.5714 | 0.6667 | 7 | 0.7653 | 0.7913 | 0.7780 | 206 | 0.78 | 0.9070 | 0.8387 | 43 | 0.7410 | 0.8989 | 0.8124 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7458 | 0.9167 | 0.8224 | 144 | 0.8713 | 0.8713 | 0.8713 | 171 | 0.2727 | 1.0 | 0.4286 | 3 | 0.8008 | 0.9644 | 0.875 | 421 | 0.4333 | 0.7959 | 0.5612 | 49 | 0.8920 | 0.9756 | 0.9319 | 491 | 0.8333 | 0.4545 | 0.5882 | 11 | 0.7578 | 0.9082 | 0.8262 | 0.9793 |
| 0.0778 | 1.39 | 18500 | 0.0661 | 0.9074 | 0.9423 | 0.9245 | 52 | 0.0714 | 0.1429 | 0.0952 | 7 | 0.8 | 0.9231 | 0.8571 | 65 | 1.0 | 0.2857 | 0.4444 | 7 | 0.8757 | 0.7864 | 0.8286 | 206 | 0.7547 | 0.9302 | 0.8333 | 43 | 0.7831 | 0.8674 | 0.8231 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8323 | 0.9306 | 0.8787 | 144 | 0.8859 | 0.9532 | 0.9183 | 171 | 0.1875 | 1.0 | 0.3158 | 3 | 0.9138 | 0.9572 | 0.9350 | 421 | 0.7963 | 0.8776 | 0.8350 | 49 | 0.8544 | 0.9919 | 0.9180 | 491 | 0.2 | 0.1818 | 0.1905 | 11 | 0.8172 | 0.8971 | 0.8553 | 0.9829 |
| 0.0672 | 1.42 | 19000 | 0.0841 | 0.6538 | 0.9808 | 0.7846 | 52 | 0.2593 | 1.0 | 0.4118 | 7 | 0.6703 | 0.9385 | 0.7821 | 65 | 0.4 | 0.2857 | 0.3333 | 7 | 0.8162 | 0.7330 | 0.7724 | 206 | 0.8696 | 0.9302 | 0.8989 | 43 | 0.7510 | 0.8747 | 0.8082 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7432 | 0.9444 | 0.8318 | 144 | 0.8477 | 0.9766 | 0.9076 | 171 | 0.1579 | 1.0 | 0.2727 | 3 | 0.8103 | 0.9739 | 0.8846 | 421 | 0.6327 | 0.6327 | 0.6327 | 49 | 0.7970 | 0.9674 | 0.8740 | 491 | 0.1190 | 0.4545 | 0.1887 | 11 | 0.7558 | 0.8977 | 0.8207 | 0.9787 |
| 0.0802 | 1.46 | 19500 | 0.0682 | 0.8276 | 0.9231 | 0.8727 | 52 | 0.4615 | 0.8571 | 0.6 | 7 | 0.7468 | 0.9077 | 0.8194 | 65 | 0.3333 | 0.2857 | 0.3077 | 7 | 0.7621 | 0.8398 | 0.7991 | 206 | 0.9091 | 0.9302 | 0.9195 | 43 | 0.7958 | 0.8801 | 0.8359 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7735 | 0.9722 | 0.8615 | 144 | 0.9357 | 0.9357 | 0.9357 | 171 | 0.3333 | 1.0 | 0.5 | 3 | 0.8385 | 0.9620 | 0.8960 | 421 | 0.5556 | 0.9184 | 0.6923 | 49 | 0.8845 | 0.9674 | 0.9241 | 491 | 0.2778 | 0.4545 | 0.3448 | 11 | 0.8074 | 0.9070 | 0.8543 | 0.9819 |
| 0.0886 | 1.5 | 20000 | 0.0633 | 0.9259 | 0.9615 | 0.9434 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7568 | 0.8615 | 0.8058 | 65 | 0.5714 | 0.5714 | 0.5714 | 7 | 0.8980 | 0.8544 | 0.8756 | 206 | 0.9302 | 0.9302 | 0.9302 | 43 | 0.8470 | 0.8916 | 0.8688 | 1652 | 0.25 | 1.0 | 0.4 | 2 | 0.8373 | 0.9653 | 0.8968 | 144 | 0.9032 | 0.9825 | 0.9412 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.9044 | 0.9667 | 0.9346 | 421 | 0.7931 | 0.9388 | 0.8598 | 49 | 0.8342 | 0.9939 | 0.9071 | 491 | 0.1053 | 0.3636 | 0.1633 | 11 | 0.8471 | 0.9185 | 0.8814 | 0.9833 |
| 0.0525 | 1.54 | 20500 | 0.0632 | 0.8197 | 0.9615 | 0.8850 | 52 | 0.7 | 1.0 | 0.8235 | 7 | 0.6742 | 0.9231 | 0.7792 | 65 | 0.4444 | 0.5714 | 0.5 | 7 | 0.7819 | 0.9223 | 0.8463 | 206 | 0.6721 | 0.9535 | 0.7885 | 43 | 0.8220 | 0.8723 | 0.8464 | 1652 | 0.0909 | 0.5 | 0.1538 | 2 | 0.7812 | 0.8681 | 0.8224 | 144 | 0.9180 | 0.9825 | 0.9492 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8838 | 0.9572 | 0.9190 | 421 | 0.5 | 0.9592 | 0.6573 | 49 | 0.8173 | 0.9837 | 0.8928 | 491 | 0.25 | 0.3636 | 0.2963 | 11 | 0.8092 | 0.9097 | 0.8565 | 0.9828 |
| 0.0664 | 1.57 | 21000 | 0.0671 | 0.8197 | 0.9615 | 0.8850 | 52 | 0.5385 | 1.0 | 0.7000 | 7 | 0.6778 | 0.9385 | 0.7871 | 65 | 0.375 | 0.4286 | 0.4000 | 7 | 0.7932 | 0.9126 | 0.8488 | 206 | 0.72 | 0.8372 | 0.7742 | 43 | 0.7546 | 0.8935 | 0.8182 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7571 | 0.9306 | 0.8349 | 144 | 0.8777 | 0.9649 | 0.9192 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8867 | 0.9667 | 0.9250 | 421 | 0.8846 | 0.9388 | 0.9109 | 49 | 0.8199 | 0.9919 | 0.8977 | 491 | 0.3333 | 0.4545 | 0.3846 | 11 | 0.7829 | 0.9221 | 0.8468 | 0.9830 |
| 0.0524 | 1.61 | 21500 | 0.0674 | 0.8305 | 0.9423 | 0.8829 | 52 | 0.5833 | 1.0 | 0.7368 | 7 | 0.7763 | 0.9077 | 0.8369 | 65 | 0.375 | 0.4286 | 0.4000 | 7 | 0.8889 | 0.8544 | 0.8713 | 206 | 0.7692 | 0.9302 | 0.8421 | 43 | 0.8235 | 0.8838 | 0.8526 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9041 | 0.9167 | 0.9103 | 144 | 0.9527 | 0.9415 | 0.9471 | 171 | 0.4286 | 1.0 | 0.6 | 3 | 0.9470 | 0.9762 | 0.9614 | 421 | 0.7857 | 0.8980 | 0.8381 | 49 | 0.8857 | 0.9939 | 0.9367 | 491 | 0.5 | 0.4545 | 0.4762 | 11 | 0.8555 | 0.9140 | 0.8838 | 0.9844 |
| 0.0603 | 1.65 | 22000 | 0.0735 | 0.7812 | 0.9615 | 0.8621 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.9206 | 0.8923 | 0.9062 | 65 | 0.8 | 0.5714 | 0.6667 | 7 | 0.8062 | 0.8883 | 0.8453 | 206 | 0.6721 | 0.9535 | 0.7885 | 43 | 0.8402 | 0.8051 | 0.8223 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8036 | 0.9375 | 0.8654 | 144 | 0.9167 | 0.9649 | 0.9402 | 171 | 0.3 | 1.0 | 0.4615 | 3 | 0.9249 | 0.9359 | 0.9303 | 421 | 0.7077 | 0.9388 | 0.8070 | 49 | 0.9198 | 0.9817 | 0.9498 | 491 | 0.6667 | 0.5455 | 0.6 | 11 | 0.8558 | 0.8715 | 0.8636 | 0.9822 |
| 0.0674 | 1.69 | 22500 | 0.0639 | 0.8103 | 0.9038 | 0.8545 | 52 | 0.2 | 0.2857 | 0.2353 | 7 | 0.7838 | 0.8923 | 0.8345 | 65 | 1.0 | 0.5714 | 0.7273 | 7 | 0.8852 | 0.8981 | 0.8916 | 206 | 0.8163 | 0.9302 | 0.8696 | 43 | 0.8393 | 0.8759 | 0.8572 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8618 | 0.9097 | 0.8851 | 144 | 0.8771 | 0.9181 | 0.8971 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.9400 | 0.9667 | 0.9532 | 421 | 0.9388 | 0.9388 | 0.9388 | 49 | 0.9030 | 0.9857 | 0.9426 | 491 | 0.3846 | 0.4545 | 0.4167 | 11 | 0.8633 | 0.9064 | 0.8844 | 0.9843 |
| 0.0693 | 1.72 | 23000 | 0.0773 | 0.7143 | 0.9615 | 0.8197 | 52 | 0.8571 | 0.8571 | 0.8571 | 7 | 0.8356 | 0.9385 | 0.8841 | 65 | 0.625 | 0.7143 | 0.6667 | 7 | 0.8009 | 0.8786 | 0.8380 | 206 | 0.7119 | 0.9767 | 0.8235 | 43 | 0.7847 | 0.9001 | 0.8385 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7640 | 0.9444 | 0.8447 | 144 | 0.8836 | 0.9766 | 0.9278 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7143 | 0.9501 | 0.8155 | 421 | 0.3780 | 0.9796 | 0.5455 | 49 | 0.8134 | 0.9674 | 0.8837 | 491 | 0.5714 | 0.3636 | 0.4444 | 11 | 0.7688 | 0.9212 | 0.8381 | 0.9808 |
| 0.0383 | 1.76 | 23500 | 0.0667 | 0.6410 | 0.9615 | 0.7692 | 52 | 0.7143 | 0.7143 | 0.7143 | 7 | 0.7692 | 0.9231 | 0.8392 | 65 | 0.8333 | 0.7143 | 0.7692 | 7 | 0.8326 | 0.8689 | 0.8504 | 206 | 0.7636 | 0.9767 | 0.8571 | 43 | 0.8580 | 0.8777 | 0.8677 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8571 | 0.9167 | 0.8859 | 144 | 0.9405 | 0.9240 | 0.9322 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8901 | 0.9810 | 0.9333 | 421 | 0.88 | 0.8980 | 0.8889 | 49 | 0.9112 | 0.9817 | 0.9451 | 491 | 0.3636 | 0.3636 | 0.3636 | 11 | 0.8628 | 0.9097 | 0.8856 | 0.9845 |
| 0.0496 | 1.8 | 24000 | 0.0712 | 0.8 | 0.9231 | 0.8571 | 52 | 0.8571 | 0.8571 | 0.8571 | 7 | 0.7262 | 0.9385 | 0.8188 | 65 | 0.7143 | 0.7143 | 0.7143 | 7 | 0.8390 | 0.8350 | 0.8370 | 206 | 0.8889 | 0.9302 | 0.9091 | 43 | 0.8522 | 0.8692 | 0.8607 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8867 | 0.9236 | 0.9048 | 144 | 0.9598 | 0.9766 | 0.9681 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8963 | 0.9857 | 0.9389 | 421 | 0.7015 | 0.9592 | 0.8103 | 49 | 0.9412 | 0.9776 | 0.9590 | 491 | 0.25 | 0.5455 | 0.3429 | 11 | 0.8659 | 0.9073 | 0.8861 | 0.9848 |
| 0.0465 | 1.84 | 24500 | 0.0612 | 0.6667 | 0.9615 | 0.7874 | 52 | 0.75 | 0.8571 | 0.8000 | 7 | 0.7625 | 0.9385 | 0.8414 | 65 | 0.7143 | 0.7143 | 0.7143 | 7 | 0.8287 | 0.8689 | 0.8483 | 206 | 0.7407 | 0.9302 | 0.8247 | 43 | 0.8236 | 0.8904 | 0.8557 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.7919 | 0.9514 | 0.8644 | 144 | 0.9326 | 0.9708 | 0.9513 | 171 | 0.0513 | 0.6667 | 0.0952 | 3 | 0.9079 | 0.9834 | 0.9441 | 421 | 0.8958 | 0.8776 | 0.8866 | 49 | 0.9186 | 0.9878 | 0.9519 | 491 | 0.1765 | 0.2727 | 0.2143 | 11 | 0.8355 | 0.9212 | 0.8762 | 0.9853 |
| 0.0446 | 1.87 | 25000 | 0.0662 | 0.6410 | 0.9615 | 0.7692 | 52 | 0.6364 | 1.0 | 0.7778 | 7 | 0.8732 | 0.9538 | 0.9118 | 65 | 0.8333 | 0.7143 | 0.7692 | 7 | 0.9378 | 0.8786 | 0.9073 | 206 | 0.8333 | 0.9302 | 0.8791 | 43 | 0.8362 | 0.8747 | 0.8550 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8447 | 0.9444 | 0.8918 | 144 | 0.9598 | 0.9766 | 0.9681 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.92 | 0.9834 | 0.9506 | 421 | 0.9070 | 0.7959 | 0.8478 | 49 | 0.9186 | 0.9878 | 0.9519 | 491 | 0.3636 | 0.3636 | 0.3636 | 11 | 0.8659 | 0.9131 | 0.8889 | 0.9851 |
| 0.0496 | 1.91 | 25500 | 0.0653 | 0.7612 | 0.9808 | 0.8571 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8472 | 0.9385 | 0.8905 | 65 | 0.5 | 0.5714 | 0.5333 | 7 | 0.9158 | 0.8981 | 0.9069 | 206 | 0.8367 | 0.9535 | 0.8913 | 43 | 0.8487 | 0.8729 | 0.8606 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8889 | 0.9444 | 0.9158 | 144 | 0.9586 | 0.9474 | 0.9529 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.9077 | 0.9810 | 0.9429 | 421 | 0.7895 | 0.9184 | 0.8491 | 49 | 0.9120 | 0.9919 | 0.9502 | 491 | 0.5 | 0.2727 | 0.3529 | 11 | 0.8714 | 0.9137 | 0.8921 | 0.9854 |
| 0.0689 | 1.95 | 26000 | 0.0689 | 0.8596 | 0.9423 | 0.8991 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.7887 | 0.8615 | 0.8235 | 65 | 0.5714 | 0.5714 | 0.5714 | 7 | 0.9064 | 0.8932 | 0.8998 | 206 | 0.8367 | 0.9535 | 0.8913 | 43 | 0.8217 | 0.9122 | 0.8646 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8232 | 0.9375 | 0.8766 | 144 | 0.9222 | 0.9708 | 0.9459 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.8827 | 0.9834 | 0.9303 | 421 | 0.9744 | 0.7755 | 0.8636 | 49 | 0.8574 | 0.9919 | 0.9197 | 491 | 0.4286 | 0.2727 | 0.3333 | 11 | 0.8441 | 0.9299 | 0.8849 | 0.9842 |
| 0.0465 | 1.99 | 26500 | 0.1060 | 0.8136 | 0.9231 | 0.8649 | 52 | 0.5 | 1.0 | 0.6667 | 7 | 0.7778 | 0.8615 | 0.8175 | 65 | 0.7143 | 0.7143 | 0.7143 | 7 | 0.8552 | 0.9175 | 0.8852 | 206 | 0.82 | 0.9535 | 0.8817 | 43 | 0.8698 | 0.8977 | 0.8835 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8904 | 0.9028 | 0.8966 | 144 | 0.9643 | 0.9474 | 0.9558 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.7361 | 0.9739 | 0.8384 | 421 | 0.25 | 0.0612 | 0.0984 | 49 | 0.8832 | 0.9552 | 0.9178 | 491 | 0.1 | 0.1818 | 0.1290 | 11 | 0.8384 | 0.9040 | 0.8700 | 0.9796 |
| 0.0448 | 2.02 | 27000 | 0.0686 | 0.7385 | 0.9231 | 0.8205 | 52 | 0.625 | 0.7143 | 0.6667 | 7 | 0.8714 | 0.9385 | 0.9037 | 65 | 0.625 | 0.7143 | 0.6667 | 7 | 0.8545 | 0.9126 | 0.8826 | 206 | 0.6727 | 0.8605 | 0.7551 | 43 | 0.8778 | 0.8959 | 0.8868 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9116 | 0.9306 | 0.9210 | 144 | 0.9538 | 0.9649 | 0.9593 | 171 | 0.0 | 0.0 | 0.0 | 3 | 0.9157 | 0.9549 | 0.9349 | 421 | 0.875 | 0.8571 | 0.8660 | 49 | 0.8855 | 0.9919 | 0.9356 | 491 | 0.4 | 0.3636 | 0.3810 | 11 | 0.8790 | 0.9200 | 0.8990 | 0.9854 |
| 0.0379 | 2.06 | 27500 | 0.0633 | 0.8421 | 0.9231 | 0.8807 | 52 | 0.2308 | 0.4286 | 0.3 | 7 | 0.8824 | 0.9231 | 0.9023 | 65 | 0.4545 | 0.7143 | 0.5556 | 7 | 0.8451 | 0.9272 | 0.8843 | 206 | 0.7037 | 0.8837 | 0.7835 | 43 | 0.8901 | 0.8674 | 0.8786 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8303 | 0.9514 | 0.8867 | 144 | 0.9706 | 0.9649 | 0.9677 | 171 | 0.3333 | 1.0 | 0.5 | 3 | 0.9300 | 0.9786 | 0.9537 | 421 | 0.9149 | 0.8776 | 0.8958 | 49 | 0.8385 | 0.9939 | 0.9096 | 491 | 0.2 | 0.3636 | 0.2581 | 11 | 0.8719 | 0.9116 | 0.8913 | 0.9859 |
| 0.0352 | 2.1 | 28000 | 0.0653 | 0.8772 | 0.9615 | 0.9174 | 52 | 0.5556 | 0.7143 | 0.6250 | 7 | 0.8158 | 0.9538 | 0.8794 | 65 | 0.75 | 0.8571 | 0.8000 | 7 | 0.8733 | 0.9369 | 0.9040 | 206 | 0.8913 | 0.9535 | 0.9213 | 43 | 0.8272 | 0.9128 | 0.8679 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8824 | 0.9375 | 0.9091 | 144 | 0.9706 | 0.9649 | 0.9677 | 171 | 0.375 | 1.0 | 0.5455 | 3 | 0.8790 | 0.9834 | 0.9283 | 421 | 0.9 | 0.9184 | 0.9091 | 49 | 0.8692 | 0.9878 | 0.9247 | 491 | 0.25 | 0.4545 | 0.3226 | 11 | 0.8493 | 0.9377 | 0.8913 | 0.9844 |
| 0.0328 | 2.14 | 28500 | 0.0599 | 0.8772 | 0.9615 | 0.9174 | 52 | 0.7143 | 0.7143 | 0.7143 | 7 | 0.8806 | 0.9077 | 0.8939 | 65 | 0.5556 | 0.7143 | 0.6250 | 7 | 0.8804 | 0.8932 | 0.8867 | 206 | 0.8542 | 0.9535 | 0.9011 | 43 | 0.8680 | 0.9074 | 0.8872 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.85 | 0.9444 | 0.8947 | 144 | 0.9701 | 0.9474 | 0.9586 | 171 | 0.2727 | 1.0 | 0.4286 | 3 | 0.9452 | 0.9834 | 0.9639 | 421 | 0.6714 | 0.9592 | 0.7899 | 49 | 0.8937 | 0.9756 | 0.9328 | 491 | 0.5 | 0.4545 | 0.4762 | 11 | 0.8786 | 0.9293 | 0.9032 | 0.9867 |
| 0.0473 | 2.17 | 29000 | 0.0595 | 0.7692 | 0.9615 | 0.8547 | 52 | 0.2222 | 0.2857 | 0.25 | 7 | 0.8493 | 0.9538 | 0.8986 | 65 | 0.6667 | 0.5714 | 0.6154 | 7 | 0.8889 | 0.9320 | 0.9100 | 206 | 0.8367 | 0.9535 | 0.8913 | 43 | 0.8341 | 0.9189 | 0.8744 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8466 | 0.9583 | 0.8990 | 144 | 0.9711 | 0.9825 | 0.9767 | 171 | 0.2143 | 1.0 | 0.3529 | 3 | 0.9234 | 0.9739 | 0.9480 | 421 | 0.75 | 0.9184 | 0.8257 | 49 | 0.8844 | 0.9817 | 0.9305 | 491 | 0.5556 | 0.4545 | 0.5 | 11 | 0.8557 | 0.9386 | 0.8953 | 0.9855 |
| 0.0511 | 2.21 | 29500 | 0.0668 | 0.6849 | 0.9615 | 0.8000 | 52 | 0.1522 | 1.0 | 0.2642 | 7 | 0.7561 | 0.9538 | 0.8435 | 65 | 0.75 | 0.8571 | 0.8000 | 7 | 0.8761 | 0.9272 | 0.9009 | 206 | 0.8039 | 0.9535 | 0.8723 | 43 | 0.8154 | 0.9195 | 0.8643 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8491 | 0.9375 | 0.8911 | 144 | 0.9709 | 0.9766 | 0.9738 | 171 | 0.2727 | 1.0 | 0.4286 | 3 | 0.8939 | 0.9810 | 0.9354 | 421 | 0.5789 | 0.8980 | 0.704 | 49 | 0.8403 | 0.9857 | 0.9072 | 491 | 0.5 | 0.4545 | 0.4762 | 11 | 0.8214 | 0.9407 | 0.8770 | 0.9845 |
| 0.0369 | 2.25 | 30000 | 0.0695 | 0.6579 | 0.9615 | 0.7812 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8732 | 0.9538 | 0.9118 | 65 | 0.625 | 0.7143 | 0.6667 | 7 | 0.9154 | 0.8932 | 0.9042 | 206 | 0.9535 | 0.9535 | 0.9535 | 43 | 0.8883 | 0.9001 | 0.8942 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9013 | 0.9514 | 0.9257 | 144 | 0.9527 | 0.9415 | 0.9471 | 171 | 0.375 | 1.0 | 0.5455 | 3 | 0.9126 | 0.9430 | 0.9276 | 421 | 0.5104 | 1.0 | 0.6759 | 49 | 0.9286 | 0.9796 | 0.9534 | 491 | 0.3571 | 0.4545 | 0.4 | 11 | 0.8837 | 0.9233 | 0.9030 | 0.9854 |
| 0.041 | 2.29 | 30500 | 0.0623 | 0.9091 | 0.9615 | 0.9346 | 52 | 0.4375 | 1.0 | 0.6087 | 7 | 0.8378 | 0.9538 | 0.8921 | 65 | 0.75 | 0.8571 | 0.8000 | 7 | 0.9061 | 0.9369 | 0.9212 | 206 | 0.8723 | 0.9535 | 0.9111 | 43 | 0.8486 | 0.9225 | 0.8840 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8940 | 0.9375 | 0.9153 | 144 | 0.9708 | 0.9708 | 0.9708 | 171 | 1.0 | 1.0 | 1.0 | 3 | 0.9556 | 0.9715 | 0.9635 | 421 | 0.7705 | 0.9592 | 0.8545 | 49 | 0.9310 | 0.9898 | 0.9595 | 491 | 0.3333 | 0.4545 | 0.3846 | 11 | 0.8803 | 0.9428 | 0.9105 | 0.9853 |
| 0.0385 | 2.32 | 31000 | 0.0632 | 0.9091 | 0.9615 | 0.9346 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.7848 | 0.9538 | 0.8611 | 65 | 0.5556 | 0.7143 | 0.6250 | 7 | 0.8915 | 0.9175 | 0.9043 | 206 | 0.9111 | 0.9535 | 0.9318 | 43 | 0.8486 | 0.9092 | 0.8778 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8961 | 0.9583 | 0.9262 | 144 | 0.9709 | 0.9766 | 0.9738 | 171 | 1.0 | 1.0 | 1.0 | 3 | 0.9180 | 0.9834 | 0.9495 | 421 | 0.8478 | 0.7959 | 0.8211 | 49 | 0.8959 | 0.9817 | 0.9368 | 491 | 0.3125 | 0.4545 | 0.3704 | 11 | 0.8724 | 0.9338 | 0.9021 | 0.9849 |
| 0.0415 | 2.36 | 31500 | 0.0647 | 0.8929 | 0.9615 | 0.9259 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8493 | 0.9538 | 0.8986 | 65 | 0.75 | 0.8571 | 0.8000 | 7 | 0.9363 | 0.9272 | 0.9317 | 206 | 0.875 | 0.9767 | 0.9231 | 43 | 0.8679 | 0.9025 | 0.8849 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8726 | 0.9514 | 0.9103 | 144 | 0.9429 | 0.9649 | 0.9538 | 171 | 0.3 | 1.0 | 0.4615 | 3 | 0.9154 | 0.9762 | 0.9448 | 421 | 0.7719 | 0.8980 | 0.8302 | 49 | 0.9067 | 0.9898 | 0.9464 | 491 | 0.1579 | 0.5455 | 0.2449 | 11 | 0.8767 | 0.9329 | 0.9039 | 0.9847 |
| 0.0454 | 2.4 | 32000 | 0.0606 | 0.9091 | 0.9615 | 0.9346 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8971 | 0.9385 | 0.9173 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.9139 | 0.9272 | 0.9205 | 206 | 0.8542 | 0.9535 | 0.9011 | 43 | 0.8652 | 0.9056 | 0.8849 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9122 | 0.9375 | 0.9247 | 144 | 0.9483 | 0.9649 | 0.9565 | 171 | 0.3 | 1.0 | 0.4615 | 3 | 0.9321 | 0.9786 | 0.9548 | 421 | 0.8136 | 0.9796 | 0.8889 | 49 | 0.9455 | 0.9898 | 0.9672 | 491 | 0.2143 | 0.5455 | 0.3077 | 11 | 0.888 | 0.9350 | 0.9109 | 0.9869 |
| 0.0334 | 2.44 | 32500 | 0.0610 | 0.8929 | 0.9615 | 0.9259 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8714 | 0.9385 | 0.9037 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.8977 | 0.9369 | 0.9169 | 206 | 0.9130 | 0.9767 | 0.9438 | 43 | 0.8463 | 0.9068 | 0.8755 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9007 | 0.9444 | 0.9220 | 144 | 0.9532 | 0.9532 | 0.9532 | 171 | 0.375 | 1.0 | 0.5455 | 3 | 0.9536 | 0.9762 | 0.9648 | 421 | 0.8545 | 0.9592 | 0.9038 | 49 | 0.9419 | 0.9898 | 0.9652 | 491 | 0.25 | 0.5455 | 0.3429 | 11 | 0.8813 | 0.9356 | 0.9076 | 0.9867 |
| 0.0453 | 2.47 | 33000 | 0.0610 | 0.8929 | 0.9615 | 0.9259 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8732 | 0.9538 | 0.9118 | 65 | 0.5556 | 0.7143 | 0.6250 | 7 | 0.9057 | 0.9320 | 0.9187 | 206 | 0.9302 | 0.9302 | 0.9302 | 43 | 0.8668 | 0.9098 | 0.8878 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8882 | 0.9375 | 0.9122 | 144 | 0.9588 | 0.9532 | 0.9560 | 171 | 0.375 | 1.0 | 0.5455 | 3 | 0.9303 | 0.9834 | 0.9561 | 421 | 1.0 | 0.8980 | 0.9462 | 49 | 0.9455 | 0.9898 | 0.9672 | 491 | 0.7143 | 0.4545 | 0.5556 | 11 | 0.8952 | 0.9353 | 0.9148 | 0.9875 |
| 0.0225 | 2.51 | 33500 | 0.0607 | 0.9259 | 0.9615 | 0.9434 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.7949 | 0.9538 | 0.8671 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.8733 | 0.9369 | 0.9040 | 206 | 0.8696 | 0.9302 | 0.8989 | 43 | 0.8641 | 0.9007 | 0.8820 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8616 | 0.9514 | 0.9043 | 144 | 0.9412 | 0.9357 | 0.9384 | 171 | 0.3333 | 1.0 | 0.5 | 3 | 0.9281 | 0.9810 | 0.9538 | 421 | 0.8868 | 0.9592 | 0.9216 | 49 | 0.9365 | 0.9919 | 0.9634 | 491 | 0.4545 | 0.4545 | 0.4545 | 11 | 0.8844 | 0.9323 | 0.9077 | 0.9876 |
| 0.0276 | 2.55 | 34000 | 0.0603 | 0.8909 | 0.9423 | 0.9159 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.775 | 0.9538 | 0.8552 | 65 | 0.6667 | 0.8571 | 0.75 | 7 | 0.8894 | 0.9369 | 0.9125 | 206 | 0.9111 | 0.9535 | 0.9318 | 43 | 0.8661 | 0.9201 | 0.8923 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8688 | 0.9653 | 0.9145 | 144 | 0.9649 | 0.9649 | 0.9649 | 171 | 0.1667 | 1.0 | 0.2857 | 3 | 0.9649 | 0.9786 | 0.9717 | 421 | 0.9020 | 0.9388 | 0.92 | 49 | 0.9222 | 0.9898 | 0.9548 | 491 | 0.4545 | 0.4545 | 0.4545 | 11 | 0.8868 | 0.9428 | 0.9140 | 0.9877 |
| 0.0291 | 2.59 | 34500 | 0.0605 | 0.9074 | 0.9423 | 0.9245 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8378 | 0.9538 | 0.8921 | 65 | 0.75 | 0.8571 | 0.8000 | 7 | 0.9091 | 0.9223 | 0.9157 | 206 | 0.9524 | 0.9302 | 0.9412 | 43 | 0.8707 | 0.9213 | 0.8953 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8947 | 0.9444 | 0.9189 | 144 | 0.9758 | 0.9415 | 0.9583 | 171 | 0.25 | 1.0 | 0.4 | 3 | 0.9448 | 0.9762 | 0.9603 | 421 | 0.9787 | 0.9388 | 0.9583 | 49 | 0.8952 | 0.9919 | 0.9411 | 491 | 0.2632 | 0.4545 | 0.3333 | 11 | 0.8885 | 0.9401 | 0.9136 | 0.9881 |
| 0.0264 | 2.62 | 35000 | 0.0616 | 0.9074 | 0.9423 | 0.9245 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8493 | 0.9538 | 0.8986 | 65 | 0.75 | 0.8571 | 0.8000 | 7 | 0.9019 | 0.9369 | 0.9190 | 206 | 0.8913 | 0.9535 | 0.9213 | 43 | 0.8694 | 0.9310 | 0.8992 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8782 | 0.9514 | 0.9133 | 144 | 0.9422 | 0.9532 | 0.9477 | 171 | 0.25 | 1.0 | 0.4 | 3 | 0.9258 | 0.9786 | 0.9515 | 421 | 0.8679 | 0.9388 | 0.9020 | 49 | 0.9272 | 0.9857 | 0.9556 | 491 | 0.1852 | 0.4545 | 0.2632 | 11 | 0.8837 | 0.9465 | 0.9140 | 0.9875 |
| 0.0343 | 2.66 | 35500 | 0.0595 | 0.7083 | 0.9808 | 0.8226 | 52 | 0.6667 | 0.8571 | 0.75 | 7 | 0.7949 | 0.9538 | 0.8671 | 65 | 0.7143 | 0.7143 | 0.7143 | 7 | 0.8858 | 0.9417 | 0.9129 | 206 | 0.9091 | 0.9302 | 0.9195 | 43 | 0.8556 | 0.9110 | 0.8824 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8616 | 0.9514 | 0.9043 | 144 | 0.9270 | 0.9649 | 0.9456 | 171 | 1.0 | 1.0 | 1.0 | 3 | 0.9388 | 0.9834 | 0.9606 | 421 | 0.8868 | 0.9592 | 0.9216 | 49 | 0.8919 | 0.9919 | 0.9392 | 491 | 0.625 | 0.4545 | 0.5263 | 11 | 0.8728 | 0.9389 | 0.9046 | 0.9871 |
| 0.0284 | 2.7 | 36000 | 0.0569 | 0.9074 | 0.9423 | 0.9245 | 52 | 0.5385 | 1.0 | 0.7000 | 7 | 0.8052 | 0.9538 | 0.8732 | 65 | 0.5556 | 0.7143 | 0.6250 | 7 | 0.9143 | 0.9320 | 0.9231 | 206 | 0.9070 | 0.9070 | 0.9070 | 43 | 0.8724 | 0.9189 | 0.8950 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9145 | 0.9653 | 0.9392 | 144 | 0.9540 | 0.9708 | 0.9623 | 171 | 0.15 | 1.0 | 0.2609 | 3 | 0.9605 | 0.9810 | 0.9706 | 421 | 0.8364 | 0.9388 | 0.8846 | 49 | 0.8907 | 0.9959 | 0.9404 | 491 | 0.625 | 0.4545 | 0.5263 | 11 | 0.8865 | 0.9425 | 0.9137 | 0.9878 |
| 0.0377 | 2.74 | 36500 | 0.0554 | 0.7083 | 0.9808 | 0.8226 | 52 | 0.5833 | 1.0 | 0.7368 | 7 | 0.7654 | 0.9538 | 0.8493 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.8981 | 0.9417 | 0.9194 | 206 | 0.9091 | 0.9302 | 0.9195 | 43 | 0.8700 | 0.9237 | 0.8961 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9205 | 0.9653 | 0.9424 | 144 | 0.9540 | 0.9708 | 0.9623 | 171 | 0.25 | 1.0 | 0.4 | 3 | 0.9063 | 0.9881 | 0.9455 | 421 | 0.9020 | 0.9388 | 0.92 | 49 | 0.8825 | 0.9939 | 0.9349 | 491 | 0.4545 | 0.4545 | 0.4545 | 11 | 0.8755 | 0.9477 | 0.9101 | 0.9883 |
| 0.0316 | 2.77 | 37000 | 0.0562 | 0.6711 | 0.9808 | 0.7969 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8052 | 0.9538 | 0.8732 | 65 | 0.6667 | 0.8571 | 0.75 | 7 | 0.9143 | 0.9320 | 0.9231 | 206 | 0.9524 | 0.9302 | 0.9412 | 43 | 0.8721 | 0.9243 | 0.8974 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.9026 | 0.9653 | 0.9329 | 144 | 0.9653 | 0.9766 | 0.9709 | 171 | 0.2143 | 1.0 | 0.3529 | 3 | 0.9202 | 0.9857 | 0.9518 | 421 | 0.8070 | 0.9388 | 0.8679 | 49 | 0.8954 | 0.9939 | 0.9421 | 491 | 0.5 | 0.4545 | 0.4762 | 11 | 0.8801 | 0.9471 | 0.9123 | 0.9885 |
| 0.0454 | 2.81 | 37500 | 0.0555 | 0.8333 | 0.9615 | 0.8929 | 52 | 0.5 | 1.0 | 0.6667 | 7 | 0.8052 | 0.9538 | 0.8732 | 65 | 0.6667 | 0.8571 | 0.75 | 7 | 0.9023 | 0.9417 | 0.9216 | 206 | 0.8696 | 0.9302 | 0.8989 | 43 | 0.8782 | 0.9249 | 0.9009 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8642 | 0.9722 | 0.9150 | 144 | 0.9337 | 0.9883 | 0.9602 | 171 | 0.2143 | 1.0 | 0.3529 | 3 | 0.9498 | 0.9881 | 0.9686 | 421 | 0.94 | 0.9592 | 0.9495 | 49 | 0.8954 | 0.9939 | 0.9421 | 491 | 0.3125 | 0.4545 | 0.3704 | 11 | 0.8845 | 0.9492 | 0.9157 | 0.9881 |
| 0.0445 | 2.85 | 38000 | 0.0521 | 0.8889 | 0.9231 | 0.9057 | 52 | 0.5385 | 1.0 | 0.7000 | 7 | 0.8493 | 0.9538 | 0.8986 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.9019 | 0.9369 | 0.9190 | 206 | 0.8511 | 0.9302 | 0.8889 | 43 | 0.8769 | 0.9183 | 0.8971 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8797 | 0.9653 | 0.9205 | 144 | 0.9767 | 0.9825 | 0.9796 | 171 | 0.25 | 1.0 | 0.4 | 3 | 0.9243 | 0.9857 | 0.9540 | 421 | 0.8103 | 0.9592 | 0.8785 | 49 | 0.8954 | 0.9939 | 0.9421 | 491 | 0.3125 | 0.4545 | 0.3704 | 11 | 0.8845 | 0.9443 | 0.9134 | 0.9884 |
| 0.0379 | 2.89 | 38500 | 0.0524 | 0.8727 | 0.9231 | 0.8972 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8611 | 0.9538 | 0.9051 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.9147 | 0.9369 | 0.9257 | 206 | 0.8696 | 0.9302 | 0.8989 | 43 | 0.8903 | 0.9183 | 0.9041 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8742 | 0.9653 | 0.9175 | 144 | 0.9711 | 0.9825 | 0.9767 | 171 | 0.2308 | 1.0 | 0.375 | 3 | 0.9101 | 0.9857 | 0.9464 | 421 | 0.8214 | 0.9388 | 0.8762 | 49 | 0.9067 | 0.9898 | 0.9464 | 491 | 0.3571 | 0.4545 | 0.4 | 11 | 0.8932 | 0.9434 | 0.9176 | 0.9885 |
| 0.0372 | 2.92 | 39000 | 0.0514 | 0.8889 | 0.9231 | 0.9057 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8378 | 0.9538 | 0.8921 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.9279 | 0.9369 | 0.9324 | 206 | 0.8333 | 0.9302 | 0.8791 | 43 | 0.8857 | 0.9195 | 0.9023 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8910 | 0.9653 | 0.9267 | 144 | 0.9767 | 0.9825 | 0.9796 | 171 | 0.3333 | 1.0 | 0.5 | 3 | 0.9141 | 0.9857 | 0.9486 | 421 | 0.8545 | 0.9592 | 0.9038 | 49 | 0.9120 | 0.9919 | 0.9502 | 491 | 0.3333 | 0.4545 | 0.3846 | 11 | 0.8946 | 0.9446 | 0.9189 | 0.9886 |
| 0.0263 | 2.96 | 39500 | 0.0515 | 0.8889 | 0.9231 | 0.9057 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8378 | 0.9538 | 0.8921 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.9190 | 0.9369 | 0.9279 | 206 | 0.8511 | 0.9302 | 0.8889 | 43 | 0.8868 | 0.9201 | 0.9031 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8910 | 0.9653 | 0.9267 | 144 | 0.9825 | 0.9825 | 0.9825 | 171 | 0.3 | 1.0 | 0.4615 | 3 | 0.9326 | 0.9857 | 0.9584 | 421 | 0.8868 | 0.9592 | 0.9216 | 49 | 0.9137 | 0.9919 | 0.9512 | 491 | 0.3571 | 0.4545 | 0.4 | 11 | 0.8982 | 0.9449 | 0.9210 | 0.9885 |
| 0.0242 | 3.0 | 40000 | 0.0518 | 0.8889 | 0.9231 | 0.9057 | 52 | 0.875 | 1.0 | 0.9333 | 7 | 0.8267 | 0.9538 | 0.8857 | 65 | 0.875 | 1.0 | 0.9333 | 7 | 0.9190 | 0.9369 | 0.9279 | 206 | 0.8696 | 0.9302 | 0.8989 | 43 | 0.8827 | 0.9201 | 0.9010 | 1652 | 0.0 | 0.0 | 0.0 | 2 | 0.8688 | 0.9653 | 0.9145 | 144 | 0.9825 | 0.9825 | 0.9825 | 171 | 0.2727 | 1.0 | 0.4286 | 3 | 0.9220 | 0.9834 | 0.9517 | 421 | 0.9038 | 0.9592 | 0.9307 | 49 | 0.9086 | 0.9919 | 0.9484 | 491 | 0.3846 | 0.4545 | 0.4167 | 11 | 0.8933 | 0.9446 | 0.9183 | 0.9885 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
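The card lists per-entity metrics but no usage example; a minimal inference sketch (an assumption, not from the card) using the `transformers` token-classification pipeline, with `aggregation_strategy="simple"` so multi-token indicators (file paths, hashes, domains) come back as whole spans:

```python
from transformers import pipeline

# Sketch: extract threat-report entities with this checkpoint.
ner = pipeline(
    "token-classification",
    model="cynthiachan/finetuned-bert-base",
    aggregation_strategy="simple",
)
print(ner("The dropper wrote C:\\Windows\\Temp\\payload.exe and beaconed to 192.0.2.10."))
```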
pig4431/amazonPolarity_fewshot
pig4431
2022-11-07T07:23:26Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-07T07:23:13Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 160 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 160, "warmup_steps": 16, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
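Since the card's `{MODEL_NAME}` placeholder was never filled in, here is a minimal similarity-scoring sketch, assuming the repository id `pig4431/amazonPolarity_fewshot` from this record's metadata; the example sentences are illustrative only.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed model id, taken from this record's metadata rather than the card itself.
model = SentenceTransformer("pig4431/amazonPolarity_fewshot")

sentences = [
    "This product exceeded my expectations.",
    "Terrible quality, it broke after one day.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```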
pig4431/Sentiment140_ELECTRA_5E
pig4431
2022-11-07T07:08:03Z
7
1
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T07:06:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sentiment140 metrics: - accuracy model-index: - name: Sentiment140_ELECTRA_5E results: - task: name: Text Classification type: text-classification dataset: name: sentiment140 type: sentiment140 config: sentiment140 split: train args: sentiment140 metrics: - name: Accuracy type: accuracy value: 0.84 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment140_ELECTRA_5E This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.5410 - Accuracy: 0.84 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6896 | 0.08 | 50 | 0.6605 | 0.7133 | | 0.6664 | 0.16 | 100 | 0.6054 | 0.7133 | | 0.5915 | 0.24 | 150 | 0.4777 | 0.8333 | | 0.5053 | 0.32 | 200 | 0.4735 | 0.7733 | | 0.4946 | 0.4 | 250 | 0.3847 | 0.8267 | | 0.4578 | 0.48 | 300 | 0.4025 | 0.8067 | | 0.4724 | 0.56 | 350 | 0.3642 | 0.8333 | | 0.4309 | 0.64 | 400 | 0.3762 | 0.86 | | 0.4818 | 0.72 | 450 | 0.3829 | 0.84 | | 0.416 | 0.8 | 500 | 0.3599 | 0.8467 | | 0.4201 | 0.88 | 550 | 0.3469 | 0.8533 | | 0.3664 | 0.96 | 600 | 0.3462 | 0.8467 | | 0.4289 | 1.04 | 650 | 0.3470 | 0.86 | | 0.3859 | 1.12 | 700 | 0.3440 | 0.8533 | | 0.3599 | 1.2 | 750 | 0.3475 | 0.8533 | | 0.3287 | 1.28 | 800 | 0.3524 | 0.8467 | | 0.3331 | 1.36 | 850 | 0.3475 | 0.8733 | | 0.3236 | 1.44 | 900 | 0.3657 | 0.8467 | | 0.3502 | 1.52 | 950 | 0.3525 | 0.84 | | 0.3702 | 1.6 | 1000 | 0.3655 | 0.8333 | | 0.3323 | 1.68 | 1050 | 0.3405 | 0.84 | | 0.3452 | 1.76 | 1100 | 0.3376 | 0.8533 | | 0.3742 | 1.84 | 1150 | 0.3481 | 0.8533 | | 0.3145 | 1.92 | 1200 | 0.3472 | 0.86 | | 0.3657 | 2.0 | 1250 | 0.3302 | 0.8733 | | 0.2601 | 2.08 | 1300 | 0.3612 | 0.86 | | 0.2954 | 2.16 | 1350 | 0.3640 | 0.8533 | | 0.2888 | 2.24 | 1400 | 0.3670 | 0.8467 | | 0.2572 | 2.32 | 1450 | 0.4118 | 0.84 | | 0.2955 | 2.4 | 1500 | 0.3811 | 0.86 | | 0.2431 | 2.48 | 1550 | 0.4221 | 0.84 | | 0.318 | 2.56 | 1600 | 0.3844 | 0.8467 | | 0.2615 | 2.64 | 1650 | 0.4109 | 0.8333 | | 0.2389 | 2.72 | 1700 | 0.4420 | 0.8467 | | 0.2983 | 2.8 | 1750 | 0.4203 | 0.8467 | | 0.2828 | 2.88 | 1800 | 0.3629 | 0.8733 | | 0.2897 | 2.96 | 1850 | 0.3916 | 0.8733 | | 0.2239 | 3.04 | 1900 | 0.4143 | 0.86 | | 0.2093 | 3.12 | 1950 | 0.4521 | 0.84 | | 0.2438 | 3.2 | 2000 | 0.4271 | 0.8467 | | 0.2282 | 3.28 | 2050 | 0.4548 | 0.8333 | | 0.1918 | 3.36 | 2100 | 0.4533 | 0.86 | | 0.1698 | 3.44 | 2150 | 0.5177 | 0.84 | | 0.2765 | 3.52 | 2200 | 0.4884 | 0.84 | | 0.2282 | 3.6 | 2250 | 0.4697 | 0.8533 | | 0.239 | 3.68 | 2300 | 0.4766 | 0.8533 | | 0.2219 | 3.76 | 2350 | 0.4628 | 0.8533 | | 0.2375 | 3.84 | 2400 | 0.4704 | 0.8533 | | 0.1883 | 3.92 | 2450 | 0.4744 | 0.84 | | 0.2049 | 4.0 | 2500 | 0.4977 | 0.84 | | 0.1958 | 4.08 | 2550 | 0.4906 | 0.84 | | 
0.1656 | 4.16 | 2600 | 0.5219 | 0.8333 | | 0.1543 | 4.24 | 2650 | 0.5379 | 0.8333 | | 0.2082 | 4.32 | 2700 | 0.5107 | 0.84 | | 0.1724 | 4.4 | 2750 | 0.5208 | 0.84 | | 0.1778 | 4.48 | 2800 | 0.5238 | 0.84 | | 0.1914 | 4.56 | 2850 | 0.5325 | 0.84 | | 0.2436 | 4.64 | 2900 | 0.5279 | 0.84 | | 0.1662 | 4.72 | 2950 | 0.5295 | 0.84 | | 0.1288 | 4.8 | 3000 | 0.5392 | 0.84 | | 0.2087 | 4.88 | 3050 | 0.5409 | 0.84 | | 0.1612 | 4.96 | 3100 | 0.5410 | 0.84 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.3.2 - Tokenizers 0.13.1
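The card stops at the training log, so here is a minimal inference sketch for this classifier. The `text-classification` pipeline call is standard Transformers usage; note that the card does not document the label mapping, so `LABEL_0`/`LABEL_1` are assumed to follow sentiment140's negative/positive ordering.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="pig4431/Sentiment140_ELECTRA_5E")

# The id2label mapping is not documented in the card; LABEL_0/LABEL_1 are
# assumed to correspond to negative/positive as in sentiment140.
print(clf("I love this new phone!"))
```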
pig4431/IMDB_fewshot
pig4431
2022-11-07T06:51:38Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-06T21:07:06Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 160 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 160, "warmup_steps": 16, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
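Given the checkpoint name, this embedding model was presumably tuned for few-shot IMDB sentiment. Below is a sketch of one way to use it, comparing a review against a pair of labeled seed sentences; the model id comes from this record's metadata and the seeds are hypothetical.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pig4431/IMDB_fewshot")  # id taken from this record's metadata

# Hypothetical labeled seeds for nearest-prototype classification.
seeds = {
    "positive": "A wonderful film, I enjoyed every minute.",
    "negative": "A boring mess, I walked out halfway through.",
}
seed_emb = {label: model.encode(text, convert_to_tensor=True) for label, text in seeds.items()}

review = "The acting was superb and the story kept me hooked."
review_emb = model.encode(review, convert_to_tensor=True)

# Pick the label whose seed embedding is closest to the review embedding.
label = max(seed_emb, key=lambda l: util.cos_sim(review_emb, seed_emb[l]).item())
print(label)
```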
pig4431/Sentiment140_XLNET_5E
pig4431
2022-11-07T06:22:19Z
89
0
transformers
[ "transformers", "pytorch", "xlnet", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T06:20:23Z
--- license: mit tags: - generated_from_trainer datasets: - sentiment140 metrics: - accuracy model-index: - name: Sentiment140_XLNET_5E results: - task: name: Text Classification type: text-classification dataset: name: sentiment140 type: sentiment140 config: sentiment140 split: train args: sentiment140 metrics: - name: Accuracy type: accuracy value: 0.84 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment140_XLNET_5E This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.3797 - Accuracy: 0.84 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6687 | 0.08 | 50 | 0.5194 | 0.76 | | 0.5754 | 0.16 | 100 | 0.4500 | 0.7867 | | 0.5338 | 0.24 | 150 | 0.3725 | 0.8333 | | 0.5065 | 0.32 | 200 | 0.4093 | 0.8133 | | 0.4552 | 0.4 | 250 | 0.3910 | 0.8267 | | 0.5352 | 0.48 | 300 | 0.3888 | 0.82 | | 0.415 | 0.56 | 350 | 0.3887 | 0.8267 | | 0.4716 | 0.64 | 400 | 0.3888 | 0.84 | | 0.4565 | 0.72 | 450 | 0.3619 | 0.84 | | 0.4447 | 0.8 | 500 | 0.3758 | 0.8333 | | 0.4407 | 0.88 | 550 | 0.3664 | 0.8133 | | 0.46 | 0.96 | 600 | 0.3797 | 0.84 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.3.2 - Tokenizers 0.13.1
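A minimal sketch of running this checkpoint without the pipeline helper, using the standard Transformers sequence-classification API; the index-to-label mapping is an assumption, since the card does not document it.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "pig4431/Sentiment140_XLNET_5E"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("What a great day!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two sentiment140 classes; which index is positive is assumed.
probs = torch.softmax(logits, dim=-1).squeeze()
print(probs.tolist())
```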
huggingtweets/mhhmmad_
huggingtweets
2022-11-07T04:41:10Z
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-07T04:41:03Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1355122703036936198/SDlJIKsr_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Mohammad Hassan</div> <div style="text-align: center; font-size: 14px;">@mhhmmad_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Mohammad Hassan. | Data | Mohammad Hassan | | --- | --- | | Tweets downloaded | 3017 | | Retweets | 679 | | Short tweets | 201 | | Tweets kept | 2137 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wifnwvu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mhhmmad_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23y6lfe2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23y6lfe2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mhhmmad_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
tkubotake/xlm-roberta-base-finetuned-panx-fr
tkubotake
2022-11-07T04:39:39Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-07T02:57:03Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.fr split: train args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8635672020287405 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4157 - F1: 0.8636 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0847 | 1.0 | 191 | 0.4066 | 0.8524 | | 0.0574 | 2.0 | 382 | 0.4025 | 0.8570 | | 0.0333 | 3.0 | 573 | 0.4157 | 0.8636 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
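A minimal usage sketch for this PAN-X French tagger. `aggregation_strategy="simple"` is standard pipeline behaviour for merging word pieces into entity spans; the example sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tkubotake/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Emmanuel Macron a visité le siège de l'ONU à New York."))
```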
Marve271/BartConditionalGeneration-bart-large-finetuned-insult
Marve271
2022-11-07T04:05:25Z
182
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-06T19:15:18Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: BartConditionalGeneration-bart-large-finetuned-insult results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BartConditionalGeneration-bart-large-finetuned-insult This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.6217 | 1.0 | 568 | 4.5864 | | 4.7444 | 2.0 | 1136 | nan | | 4.2308 | 3.0 | 1704 | 3.7590 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
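The card does not document the training data or the expected input format, so the following is only a generic text2text-generation sketch for this checkpoint; the prompt and sampling settings are assumptions.

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Marve271/BartConditionalGeneration-bart-large-finetuned-insult",
)

# The expected prompt format is undocumented, so this input is purely illustrative.
print(generator("You are a", max_length=40, do_sample=True, top_p=0.9))
```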
Formzu/bart-large-japanese
Formzu
2022-11-07T03:06:32Z
6
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "bart", "ja", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-31T06:53:19Z
--- language: - ja license: mit tags: - bart - pytorch datasets: - wikipedia --- # bart-large-japanese This model is converted from the original [Japanese BART Pretrained model](https://nlp.ist.i.kyoto-u.ac.jp/?BART%E6%97%A5%E6%9C%AC%E8%AA%9EPretrained%E3%83%A2%E3%83%87%E3%83%AB) released by Kyoto University. Both the encoder and decoder outputs are identical to the original Fairseq model. ### How to use the model The input text should be tokenized by [BartJapaneseTokenizer](https://huggingface.co/Formzu/bart-large-japanese/blob/main/tokenization_bart_japanese.py). Tokenizer requirements: * [Juman++](https://github.com/ku-nlp/jumanpp) * [zenhan](https://pypi.org/project/zenhan/) * [pyknp](https://pypi.org/project/pyknp/) * [sentencepiece](https://pypi.org/project/sentencepiece/) #### Simple FillMaskPipeline ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline model_name = "Formzu/bart-large-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer) out = fill_mask(masked_text) print(out) # [{'score': 0.03228279948234558, 'token': 2566, 'token_str': 'いい', 'sequence': '天気 が いい から 散歩 し ましょう 。'}, # {'score': 0.023878807201981544, 'token': 27365, 'token_str': '晴れ', 'sequence': '天気 が 晴れ から 散歩 し ましょう 。'}, # {'score': 0.020059829577803612, 'token': 267, 'token_str': '南', 'sequence': '天気 が 南 から 散歩 し ましょう 。'}, # {'score': 0.013921134173870087, 'token': 17, 'token_str': 'な', 'sequence': '天気 が な から 散歩 し ましょう 。'}, # {'score': 0.013069136068224907, 'token': 1718, 'token_str': 'よく', 'sequence': '天気 が よく から 散歩 し ましょう 。'}] ``` #### Text Generation ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "Formzu/bart-large-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" inp = tokenizer(masked_text, return_tensors='pt').to(device) out = model.generate(**inp, num_beams=1, min_length=0, max_length=20, early_stopping=True, no_repeat_ngram_size=2) res = "".join(tokenizer.decode(out.squeeze(0).tolist(), skip_special_tokens=True).split(" ")) print(res) # 天気がいいから散歩しましょう。天気のいいへやから、ここから ``` ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Tokenizers 0.12.1
Formzu/bart-base-japanese
Formzu
2022-11-07T02:13:39Z
7
2
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "bart", "ja", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-31T06:52:38Z
--- language: - ja license: mit tags: - bart - pytorch datasets: - wikipedia --- # bart-base-japanese This model is converted from the original [Japanese BART Pretrained model](https://nlp.ist.i.kyoto-u.ac.jp/?BART%E6%97%A5%E6%9C%AC%E8%AA%9EPretrained%E3%83%A2%E3%83%87%E3%83%AB) released by Kyoto University. Both the encoder and decoder outputs are identical to the original Fairseq model. ### How to use the model The input text should be tokenized by [BartJapaneseTokenizer](https://huggingface.co/Formzu/bart-base-japanese/blob/main/tokenization_bart_japanese.py). Tokenizer requirements: * [Juman++](https://github.com/ku-nlp/jumanpp) * [zenhan](https://pypi.org/project/zenhan/) * [pyknp](https://pypi.org/project/pyknp/) * [sentencepiece](https://pypi.org/project/sentencepiece/) #### Simple FillMaskPipeline ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline model_name = "Formzu/bart-base-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer) out = fill_mask(masked_text) print(out) # [{'score': 0.19255658984184265, 'token': 1718, 'token_str': 'よく', 'sequence': '天気 が よく から 散歩 し ましょう 。'}, # {'score': 0.14426815509796143, 'token': 5478, 'token_str': '良く', 'sequence': '天気 が 良く から 散歩 し ましょう 。'}, # {'score': 0.05554169788956642, 'token': 6561, 'token_str': '悪い', 'sequence': '天気 が 悪い から 散歩 し ましょう 。'}, # {'score': 0.05524599179625511, 'token': 3553, 'token_str': '良い', 'sequence': '天気 が 良い から 散歩 し ましょう 。'}, # {'score': 0.03720080852508545, 'token': 1370, 'token_str': '良', 'sequence': '天気 が 良 から 散歩 し ましょう 。'}] ``` #### Text Generation ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "Formzu/bart-base-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" inp = tokenizer(masked_text, return_tensors='pt').to(device) out = model.generate(**inp, num_beams=1, min_length=0, max_length=20, early_stopping=True, no_repeat_ngram_size=2) res = "".join(tokenizer.decode(out.squeeze(0).tolist(), skip_special_tokens=True).split(" ")) print(res) # 天気がよくなってから散歩しましょう。天気のよく合っているところにいる ``` ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Tokenizers 0.12.1
kaejo98/bart-base_question_generation
kaejo98
2022-11-06T23:27:56Z
8
4
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-01T22:36:20Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base_question_generation
  results: []
---

# BART-base Question Generation

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on several question answering datasets. It was trained to generate questions using two different approaches, <b>Casual-Generation</b> and <b>Context-based-Generation</b>.

## Model description

The model takes a context as the input sequence and generates a full question sentence as the output sequence. There are two ways the model can be queried to produce questions:

- <b>Casual-Generation</b>: the model is tasked to generate questions answerable by a given passage. The input should follow the format: '\<generate_questions\> paragraph: put your passage text here'. <br/> Example: <br/> \<generate_questions\> paragraph: The lithosphere is broken into tectonic plates whose motion allows heat to escape from the interior of the Earth into space. The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity.
- <b>Context-based-Generation</b>: given a section of a passage (context), the model is tasked to generate questions from the passage about the selected section or context. The input should follow the format: '\<generate_context_questions\> \<section\> put your context here \</section\> paragraph: put your passage text here'. <br/> Example: <br/> \<generate_context_questions\> \<section\> Mohorovičić discontinuity \</section\> paragraph: The lithosphere is broken into tectonic plates whose motion allows heat to escape from the interior of the Earth into space. The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity.

The input sequence can then be encoded and passed as the input_ids argument in the model's generate() method; a short sketch is given below.

## Limitations

The model was trained on a limited amount of data, so generated questions may be of poor quality. In addition, the generated questions mirror the style of the training data.

## Training and evaluation data

The dataset used to train the model comprises the training datasets from:
- Reasoning Over Paragraph Effects in Situations (ROPES): https://allenai.org/data/ropes
- SQuAD
- DROP (Discrete Reasoning Over Paragraphs): https://allenai.org/data/drop
- SciQ

After preprocessing the data from the above datasets, we had 408,372 examples for training the model, 25k for development, and 18k for testing.

## Training procedure

The model is trained (fine-tuned) for 5 epochs with the hyperparameters listed below:

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 5

At the end of 5 epochs, the evaluation loss was 1.64 and the training loss was 0.9671.
### Framework versions

- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
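Putting the two prompt formats described above together, here is a minimal generation sketch (the beam-search settings are illustrative, not taken from the card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "kaejo98/bart-base_question_generation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

passage = (
    "The lithosphere is broken into tectonic plates whose motion allows heat "
    "to escape from the interior of the Earth into space."
)

# Casual-Generation: questions answerable by the passage.
casual = f"<generate_questions> paragraph: {passage}"
# Context-based-Generation: questions about a selected section of the passage.
contextual = f"<generate_context_questions> <section> tectonic plates </section> paragraph: {passage}"

for prompt in (casual, contextual):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output = model.generate(input_ids, max_length=64, num_beams=4)  # illustrative settings
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```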
holacaracola/Arlenas_room
holacaracola
2022-11-06T23:12:26Z
0
0
null
[ "region:us" ]
null
2022-11-06T23:12:07Z
Room with a big wardrobe which contains lots of sport suits, makeup and accessories.
halflings/house_price_prediction_ser2
halflings
2022-11-06T21:40:14Z
0
0
mlconsole
[ "mlconsole", "tabular-regression", "dataset:house_price_prediction", "license:unknown", "model-index", "region:us" ]
tabular-regression
2022-11-06T21:40:10Z
--- license: unknown inference: false tags: - mlconsole - tabular-regression library_name: mlconsole metrics: - mae - loss datasets: - house_price_prediction model-index: - name: house_price_prediction_ser2 results: - task: type: tabular-regression name: tabular-regression dataset: type: house_price_prediction name: house_price_prediction metrics: - type: mae name: Mean absolute error value: 5.011783599853516 - type: loss name: Model loss value: 43.01755905151367 --- # regression model trained on "house_price_prediction" 🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/house_price_prediction_ser2) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console.
halflings/house_price_prediction_dev
halflings
2022-11-06T21:34:02Z
0
0
mlconsole
[ "mlconsole", "tabular-regression", "dataset:house_price_prediction", "license:unknown", "model-index", "region:us" ]
tabular-regression
2022-11-06T21:33:58Z
--- license: unknown inference: false tags: - mlconsole - tabular-regression library_name: mlconsole metrics: - mae - loss datasets: - house_price_prediction model-index: - name: house_price_prediction_dev results: - task: type: tabular-regression name: tabular-regression dataset: type: house_price_prediction name: house_price_prediction metrics: - type: mae name: Mean absolute error value: 7.064809322357178 - type: loss name: Model loss value: 98.9962387084961 --- # regression model trained on "house_price_prediction" 🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/house_price_prediction_dev) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console.
pig4431/Sentiment140_roBERTa_5E
pig4431
2022-11-06T21:17:53Z
17
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T21:15:53Z
--- license: mit tags: - generated_from_trainer datasets: - sentiment140 metrics: - accuracy model-index: - name: Sentiment140_roBERTa_5E results: - task: name: Text Classification type: text-classification dataset: name: sentiment140 type: sentiment140 config: sentiment140 split: train args: sentiment140 metrics: - name: Accuracy type: accuracy value: 0.8933333333333333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment140_roBERTa_5E This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.4796 - Accuracy: 0.8933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.699 | 0.08 | 50 | 0.6734 | 0.5467 | | 0.6099 | 0.16 | 100 | 0.4322 | 0.8 | | 0.4906 | 0.24 | 150 | 0.3861 | 0.84 | | 0.4652 | 0.32 | 200 | 0.4288 | 0.7933 | | 0.4874 | 0.4 | 250 | 0.3872 | 0.84 | | 0.4735 | 0.48 | 300 | 0.3401 | 0.8667 | | 0.3909 | 0.56 | 350 | 0.3484 | 0.84 | | 0.4277 | 0.64 | 400 | 0.3207 | 0.88 | | 0.3894 | 0.72 | 450 | 0.3310 | 0.8733 | | 0.4523 | 0.8 | 500 | 0.3389 | 0.8667 | | 0.4087 | 0.88 | 550 | 0.3515 | 0.8467 | | 0.3973 | 0.96 | 600 | 0.3513 | 0.8467 | | 0.4016 | 1.04 | 650 | 0.3501 | 0.8667 | | 0.3613 | 1.12 | 700 | 0.3327 | 0.8667 | | 0.343 | 1.2 | 750 | 0.3518 | 0.86 | | 0.314 | 1.28 | 800 | 0.3555 | 0.88 | | 0.3407 | 1.36 | 850 | 0.3849 | 0.86 | | 0.2944 | 1.44 | 900 | 0.3576 | 0.8667 | | 0.3267 | 1.52 | 950 | 0.3461 | 0.8733 | | 0.3251 | 1.6 | 1000 | 0.3411 | 0.8667 | | 0.321 | 1.68 | 1050 | 0.3371 | 0.88 | | 0.3057 | 1.76 | 1100 | 0.3322 | 0.88 | | 0.3335 | 1.84 | 1150 | 0.3106 | 0.8667 | | 0.3363 | 1.92 | 1200 | 0.3158 | 0.8933 | | 0.2972 | 2.0 | 1250 | 0.3122 | 0.88 | | 0.2453 | 2.08 | 1300 | 0.3327 | 0.8867 | | 0.2467 | 2.16 | 1350 | 0.3767 | 0.8667 | | 0.273 | 2.24 | 1400 | 0.3549 | 0.8667 | | 0.2672 | 2.32 | 1450 | 0.3470 | 0.88 | | 0.2352 | 2.4 | 1500 | 0.4092 | 0.8667 | | 0.2763 | 2.48 | 1550 | 0.3472 | 0.9 | | 0.2858 | 2.56 | 1600 | 0.3440 | 0.9 | | 0.2206 | 2.64 | 1650 | 0.3770 | 0.88 | | 0.2928 | 2.72 | 1700 | 0.3280 | 0.8867 | | 0.2478 | 2.8 | 1750 | 0.3426 | 0.8867 | | 0.2362 | 2.88 | 1800 | 0.3578 | 0.8933 | | 0.2107 | 2.96 | 1850 | 0.3986 | 0.8933 | | 0.2191 | 3.04 | 1900 | 0.3819 | 0.8933 | | 0.2267 | 3.12 | 1950 | 0.4047 | 0.8867 | | 0.2076 | 3.2 | 2000 | 0.4303 | 0.8867 | | 0.1868 | 3.28 | 2050 | 0.4385 | 0.8933 | | 0.2239 | 3.36 | 2100 | 0.4175 | 0.8933 | | 0.2082 | 3.44 | 2150 | 0.4142 | 0.8933 | | 0.2423 | 3.52 | 2200 | 0.4002 | 0.8867 | | 0.1878 | 3.6 | 2250 | 0.4662 | 0.88 | | 0.1892 | 3.68 | 2300 | 0.4783 | 0.88 | | 0.2259 | 3.76 | 2350 | 0.4487 | 0.88 | | 0.1859 | 3.84 | 2400 | 0.4456 | 0.8933 | | 0.2042 | 3.92 | 2450 | 0.4468 | 0.8933 | | 0.2096 | 4.0 | 2500 | 0.4153 | 0.8867 | | 0.178 | 4.08 | 2550 | 0.4100 | 0.8933 | | 0.1621 | 4.16 | 2600 | 0.4292 | 0.8933 | | 
0.1682 | 4.24 | 2650 | 0.4602 | 0.8933 | | 0.1813 | 4.32 | 2700 | 0.4680 | 0.8933 | | 0.2033 | 4.4 | 2750 | 0.4735 | 0.8933 | | 0.1662 | 4.48 | 2800 | 0.4750 | 0.88 | | 0.1686 | 4.56 | 2850 | 0.4830 | 0.8933 | | 0.1603 | 4.64 | 2900 | 0.4909 | 0.8933 | | 0.148 | 4.72 | 2950 | 0.4784 | 0.8933 | | 0.162 | 4.8 | 3000 | 0.4750 | 0.8867 | | 0.153 | 4.88 | 3050 | 0.4759 | 0.8867 | | 0.1657 | 4.96 | 3100 | 0.4796 | 0.8933 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.3.2 - Tokenizers 0.13.1
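As with the other sentiment140 checkpoints in this dump, inference is a single pipeline call; shown here with batched inputs (the tweets are illustrative, and the label names follow the model's config rather than anything documented in the card).

```python
from transformers import pipeline

clf = pipeline("text-classification", model="pig4431/Sentiment140_roBERTa_5E")

# Batched inference over several example tweets.
tweets = ["I love this!", "This is awful...", "Meh, it's okay I guess."]
for tweet, pred in zip(tweets, clf(tweets)):
    print(tweet, "->", pred["label"], round(pred["score"], 3))
```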
pig4431/amazonPolarity_DistilBERT_5E
pig4431
2022-11-06T20:58:38Z
107
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:amazon_polarity", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T20:54:32Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_polarity metrics: - accuracy model-index: - name: amazonPolarity_DistilBERT_5EE results: - task: name: Text Classification type: text-classification dataset: name: amazon_polarity type: amazon_polarity config: amazon_polarity split: train args: amazon_polarity metrics: - name: Accuracy type: accuracy value: 0.94 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazonPolarity_DistilBERT_5EE This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.2899 - Accuracy: 0.94 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6581 | 0.03 | 50 | 0.5315 | 0.84 | | 0.4321 | 0.05 | 100 | 0.2897 | 0.8933 | | 0.298 | 0.08 | 150 | 0.3165 | 0.8667 | | 0.2902 | 0.11 | 200 | 0.2552 | 0.9067 | | 0.2824 | 0.13 | 250 | 0.2277 | 0.9133 | | 0.2522 | 0.16 | 300 | 0.1998 | 0.94 | | 0.2781 | 0.19 | 350 | 0.1933 | 0.94 | | 0.2668 | 0.21 | 400 | 0.2316 | 0.92 | | 0.2619 | 0.24 | 450 | 0.1968 | 0.9333 | | 0.2446 | 0.27 | 500 | 0.1846 | 0.9467 | | 0.2677 | 0.29 | 550 | 0.1818 | 0.94 | | 0.2026 | 0.32 | 600 | 0.2348 | 0.9133 | | 0.2351 | 0.35 | 650 | 0.2127 | 0.92 | | 0.2685 | 0.37 | 700 | 0.1792 | 0.94 | | 0.2141 | 0.4 | 750 | 0.2252 | 0.9133 | | 0.2193 | 0.43 | 800 | 0.2131 | 0.9267 | | 0.2456 | 0.45 | 850 | 0.2205 | 0.9133 | | 0.2548 | 0.48 | 900 | 0.1788 | 0.94 | | 0.2353 | 0.51 | 950 | 0.1954 | 0.9267 | | 0.2546 | 0.53 | 1000 | 0.1815 | 0.9333 | | 0.2583 | 0.56 | 1050 | 0.1654 | 0.9333 | | 0.219 | 0.59 | 1100 | 0.1760 | 0.9467 | | 0.2241 | 0.61 | 1150 | 0.2107 | 0.92 | | 0.2201 | 0.64 | 1200 | 0.2381 | 0.8933 | | 0.1745 | 0.67 | 1250 | 0.1944 | 0.92 | | 0.2698 | 0.69 | 1300 | 0.1971 | 0.9267 | | 0.214 | 0.72 | 1350 | 0.1944 | 0.9333 | | 0.2436 | 0.75 | 1400 | 0.2079 | 0.92 | | 0.2318 | 0.77 | 1450 | 0.2088 | 0.9333 | | 0.2206 | 0.8 | 1500 | 0.1875 | 0.94 | | 0.2593 | 0.83 | 1550 | 0.1797 | 0.9267 | | 0.1908 | 0.85 | 1600 | 0.1924 | 0.9333 | | 0.2378 | 0.88 | 1650 | 0.1649 | 0.9267 | | 0.2332 | 0.91 | 1700 | 0.1768 | 0.94 | | 0.2125 | 0.93 | 1750 | 0.2276 | 0.92 | | 0.2174 | 0.96 | 1800 | 0.2035 | 0.9333 | | 0.19 | 0.99 | 1850 | 0.1805 | 0.94 | | 0.1515 | 1.01 | 1900 | 0.1832 | 0.94 | | 0.1671 | 1.04 | 1950 | 0.1902 | 0.94 | | 0.171 | 1.07 | 2000 | 0.2468 | 0.9267 | | 0.1495 | 1.09 | 2050 | 0.2276 | 0.9267 | | 0.1535 | 1.12 | 2100 | 0.1926 | 0.94 | | 0.2085 | 1.15 | 2150 | 0.1878 | 0.94 | | 0.1395 | 1.17 | 2200 | 0.1795 | 0.9467 | | 0.1556 | 1.2 | 2250 | 0.1554 | 0.9467 | | 0.1273 | 1.23 | 2300 | 0.1707 | 0.94 | | 0.1873 | 1.25 | 2350 | 0.1867 | 0.9467 | | 0.1589 | 1.28 | 2400 | 0.2089 | 0.9333 | | 0.1426 | 1.31 | 2450 | 0.1797 | 0.9467 | | 0.149 | 1.33 | 2500 | 0.1991 | 0.9333 | | 0.1535 | 1.36 | 2550 | 0.2116 | 0.94 | 
| 0.1671 | 1.39 | 2600 | 0.1704 | 0.9467 | | 0.1582 | 1.41 | 2650 | 0.1843 | 0.94 | | 0.1393 | 1.44 | 2700 | 0.1831 | 0.94 | | 0.1474 | 1.47 | 2750 | 0.1895 | 0.94 | | 0.203 | 1.49 | 2800 | 0.1843 | 0.9467 | | 0.1562 | 1.52 | 2850 | 0.2060 | 0.9467 | | 0.1886 | 1.55 | 2900 | 0.1837 | 0.94 | | 0.1332 | 1.57 | 2950 | 0.1920 | 0.9467 | | 0.1519 | 1.6 | 3000 | 0.1789 | 0.9533 | | 0.1354 | 1.63 | 3050 | 0.1974 | 0.9467 | | 0.125 | 1.65 | 3100 | 0.1890 | 0.9533 | | 0.2044 | 1.68 | 3150 | 0.1755 | 0.9533 | | 0.1746 | 1.71 | 3200 | 0.1607 | 0.9467 | | 0.1981 | 1.73 | 3250 | 0.1613 | 0.9533 | | 0.1276 | 1.76 | 3300 | 0.1825 | 0.96 | | 0.1935 | 1.79 | 3350 | 0.1707 | 0.9533 | | 0.1848 | 1.81 | 3400 | 0.1697 | 0.96 | | 0.1596 | 1.84 | 3450 | 0.1581 | 0.9667 | | 0.1797 | 1.87 | 3500 | 0.1634 | 0.96 | | 0.1493 | 1.89 | 3550 | 0.1614 | 0.9533 | | 0.1703 | 1.92 | 3600 | 0.1673 | 0.9467 | | 0.1951 | 1.95 | 3650 | 0.1589 | 0.9533 | | 0.1582 | 1.97 | 3700 | 0.1761 | 0.9467 | | 0.1974 | 2.0 | 3750 | 0.1918 | 0.94 | | 0.1056 | 2.03 | 3800 | 0.2063 | 0.94 | | 0.1109 | 2.05 | 3850 | 0.2031 | 0.9467 | | 0.113 | 2.08 | 3900 | 0.2118 | 0.9467 | | 0.0834 | 2.11 | 3950 | 0.1974 | 0.9533 | | 0.1434 | 2.13 | 4000 | 0.2075 | 0.9533 | | 0.0691 | 2.16 | 4050 | 0.2178 | 0.9533 | | 0.1144 | 2.19 | 4100 | 0.2383 | 0.9467 | | 0.1446 | 2.21 | 4150 | 0.2207 | 0.9533 | | 0.172 | 2.24 | 4200 | 0.2034 | 0.9467 | | 0.1026 | 2.27 | 4250 | 0.2048 | 0.9467 | | 0.1131 | 2.29 | 4300 | 0.2334 | 0.9467 | | 0.121 | 2.32 | 4350 | 0.2367 | 0.9333 | | 0.1144 | 2.35 | 4400 | 0.2313 | 0.9467 | | 0.1089 | 2.37 | 4450 | 0.2352 | 0.9533 | | 0.1193 | 2.4 | 4500 | 0.2440 | 0.94 | | 0.0689 | 2.43 | 4550 | 0.2379 | 0.9333 | | 0.1799 | 2.45 | 4600 | 0.2354 | 0.9467 | | 0.1068 | 2.48 | 4650 | 0.2158 | 0.9533 | | 0.0974 | 2.51 | 4700 | 0.2456 | 0.94 | | 0.0637 | 2.53 | 4750 | 0.2191 | 0.9333 | | 0.1125 | 2.56 | 4800 | 0.2390 | 0.9467 | | 0.1706 | 2.59 | 4850 | 0.2407 | 0.94 | | 0.1533 | 2.61 | 4900 | 0.2242 | 0.9533 | | 0.1357 | 2.64 | 4950 | 0.2119 | 0.9533 | | 0.1342 | 2.67 | 5000 | 0.2268 | 0.9467 | | 0.0796 | 2.69 | 5050 | 0.2450 | 0.9467 | | 0.1351 | 2.72 | 5100 | 0.2499 | 0.94 | | 0.1285 | 2.75 | 5150 | 0.2252 | 0.94 | | 0.1563 | 2.77 | 5200 | 0.2191 | 0.94 | | 0.1022 | 2.8 | 5250 | 0.2256 | 0.9533 | | 0.11 | 2.83 | 5300 | 0.2365 | 0.9467 | | 0.0926 | 2.85 | 5350 | 0.2206 | 0.9467 | | 0.1043 | 2.88 | 5400 | 0.2018 | 0.9533 | | 0.1041 | 2.91 | 5450 | 0.2268 | 0.9467 | | 0.1232 | 2.93 | 5500 | 0.2164 | 0.9467 | | 0.1537 | 2.96 | 5550 | 0.1956 | 0.9533 | | 0.1188 | 2.99 | 5600 | 0.2126 | 0.9467 | | 0.0749 | 3.01 | 5650 | 0.2249 | 0.9467 | | 0.062 | 3.04 | 5700 | 0.2254 | 0.9467 | | 0.0755 | 3.07 | 5750 | 0.2472 | 0.94 | | 0.0866 | 3.09 | 5800 | 0.2569 | 0.94 | | 0.0502 | 3.12 | 5850 | 0.2481 | 0.9467 | | 0.1158 | 3.15 | 5900 | 0.2457 | 0.94 | | 0.0413 | 3.17 | 5950 | 0.2500 | 0.94 | | 0.0966 | 3.2 | 6000 | 0.2851 | 0.9333 | | 0.0613 | 3.23 | 6050 | 0.2717 | 0.9467 | | 0.1029 | 3.25 | 6100 | 0.2714 | 0.94 | | 0.0833 | 3.28 | 6150 | 0.2683 | 0.94 | | 0.0928 | 3.31 | 6200 | 0.2490 | 0.9467 | | 0.0571 | 3.33 | 6250 | 0.2575 | 0.9533 | | 0.1252 | 3.36 | 6300 | 0.2599 | 0.9467 | | 0.0788 | 3.39 | 6350 | 0.2522 | 0.9467 | | 0.0862 | 3.41 | 6400 | 0.2489 | 0.9533 | | 0.112 | 3.44 | 6450 | 0.2452 | 0.9533 | | 0.0868 | 3.47 | 6500 | 0.2438 | 0.9533 | | 0.0979 | 3.49 | 6550 | 0.2474 | 0.94 | | 0.0739 | 3.52 | 6600 | 0.2508 | 0.94 | | 0.0786 | 3.55 | 6650 | 0.2621 | 0.94 | | 0.0872 | 3.57 | 6700 | 0.2543 | 0.9333 | | 0.0962 | 3.6 | 6750 | 0.2347 | 0.9467 | | 
0.124 | 3.63 | 6800 | 0.2319 | 0.9533 | | 0.0747 | 3.65 | 6850 | 0.2448 | 0.9533 | | 0.0591 | 3.68 | 6900 | 0.2379 | 0.94 | | 0.1049 | 3.71 | 6950 | 0.2493 | 0.9333 | | 0.0772 | 3.73 | 7000 | 0.2429 | 0.94 | | 0.071 | 3.76 | 7050 | 0.2558 | 0.94 | | 0.1116 | 3.79 | 7100 | 0.2600 | 0.94 | | 0.1199 | 3.81 | 7150 | 0.2480 | 0.94 | | 0.0819 | 3.84 | 7200 | 0.2506 | 0.94 | | 0.1054 | 3.87 | 7250 | 0.2431 | 0.94 | | 0.09 | 3.89 | 7300 | 0.2582 | 0.9333 | | 0.0936 | 3.92 | 7350 | 0.2460 | 0.94 | | 0.0469 | 3.95 | 7400 | 0.2509 | 0.94 | | 0.1101 | 3.97 | 7450 | 0.2545 | 0.9467 | | 0.1077 | 4.0 | 7500 | 0.2640 | 0.9467 | | 0.0777 | 4.03 | 7550 | 0.2709 | 0.94 | | 0.0777 | 4.05 | 7600 | 0.2842 | 0.94 | | 0.0847 | 4.08 | 7650 | 0.2649 | 0.94 | | 0.0462 | 4.11 | 7700 | 0.2702 | 0.9467 | | 0.0572 | 4.13 | 7750 | 0.2628 | 0.94 | | 0.0435 | 4.16 | 7800 | 0.2689 | 0.9467 | | 0.0566 | 4.19 | 7850 | 0.2727 | 0.9467 | | 0.1149 | 4.21 | 7900 | 0.2635 | 0.9467 | | 0.0557 | 4.24 | 7950 | 0.2665 | 0.9467 | | 0.061 | 4.27 | 8000 | 0.2680 | 0.9467 | | 0.0664 | 4.29 | 8050 | 0.2767 | 0.9467 | | 0.0481 | 4.32 | 8100 | 0.2662 | 0.9467 | | 0.0893 | 4.35 | 8150 | 0.2677 | 0.9467 | | 0.0855 | 4.37 | 8200 | 0.2733 | 0.9467 | | 0.0552 | 4.4 | 8250 | 0.2589 | 0.94 | | 0.0469 | 4.43 | 8300 | 0.2733 | 0.94 | | 0.0633 | 4.45 | 8350 | 0.2799 | 0.94 | | 0.0629 | 4.48 | 8400 | 0.2838 | 0.94 | | 0.0854 | 4.51 | 8450 | 0.2837 | 0.94 | | 0.0596 | 4.53 | 8500 | 0.2808 | 0.94 | | 0.0579 | 4.56 | 8550 | 0.2839 | 0.94 | | 0.0508 | 4.59 | 8600 | 0.2844 | 0.94 | | 0.0557 | 4.61 | 8650 | 0.2833 | 0.94 | | 0.0383 | 4.64 | 8700 | 0.2878 | 0.94 | | 0.0554 | 4.67 | 8750 | 0.2924 | 0.94 | | 0.0681 | 4.69 | 8800 | 0.2868 | 0.94 | | 0.065 | 4.72 | 8850 | 0.2888 | 0.94 | | 0.0731 | 4.75 | 8900 | 0.2946 | 0.94 | | 0.0638 | 4.77 | 8950 | 0.2886 | 0.94 | | 0.043 | 4.8 | 9000 | 0.2867 | 0.94 | | 0.0658 | 4.83 | 9050 | 0.2872 | 0.94 | | 0.0249 | 4.85 | 9100 | 0.2882 | 0.94 | | 0.0612 | 4.88 | 9150 | 0.2902 | 0.94 | | 0.0271 | 4.91 | 9200 | 0.2890 | 0.94 | | 0.0308 | 4.93 | 9250 | 0.2897 | 0.94 | | 0.0896 | 4.96 | 9300 | 0.2898 | 0.94 | | 0.1172 | 4.99 | 9350 | 0.2899 | 0.94 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
pig4431/IMDB_DistilBERT_5E
pig4431
2022-11-06T20:41:02Z
109
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T20:36:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: IMDB_DistilBERT_5EE results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.94 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IMDB_DistilBERT_5EE This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2023 - Accuracy: 0.94 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6748 | 0.03 | 50 | 0.5955 | 0.88 | | 0.4404 | 0.06 | 100 | 0.2853 | 0.9 | | 0.3065 | 0.1 | 150 | 0.2208 | 0.9 | | 0.3083 | 0.13 | 200 | 0.2023 | 0.9333 | | 0.2922 | 0.16 | 250 | 0.1530 | 0.94 | | 0.2761 | 0.19 | 300 | 0.2035 | 0.9267 | | 0.2145 | 0.22 | 350 | 0.2450 | 0.9 | | 0.258 | 0.26 | 400 | 0.1680 | 0.9267 | | 0.2702 | 0.29 | 450 | 0.1607 | 0.9333 | | 0.2587 | 0.32 | 500 | 0.1496 | 0.9467 | | 0.2822 | 0.35 | 550 | 0.1405 | 0.9333 | | 0.2538 | 0.38 | 600 | 0.1396 | 0.9467 | | 0.2707 | 0.42 | 650 | 0.1626 | 0.9333 | | 0.2408 | 0.45 | 700 | 0.1623 | 0.9067 | | 0.2531 | 0.48 | 750 | 0.1300 | 0.9467 | | 0.2014 | 0.51 | 800 | 0.1529 | 0.9333 | | 0.2454 | 0.54 | 850 | 0.1365 | 0.94 | | 0.2282 | 0.58 | 900 | 0.1447 | 0.9533 | | 0.2554 | 0.61 | 950 | 0.1321 | 0.9467 | | 0.24 | 0.64 | 1000 | 0.1256 | 0.9467 | | 0.2239 | 0.67 | 1050 | 0.1290 | 0.9467 | | 0.2865 | 0.7 | 1100 | 0.1288 | 0.9667 | | 0.2456 | 0.74 | 1150 | 0.1299 | 0.9533 | | 0.2407 | 0.77 | 1200 | 0.1565 | 0.9267 | | 0.2256 | 0.8 | 1250 | 0.1262 | 0.96 | | 0.238 | 0.83 | 1300 | 0.1599 | 0.9333 | | 0.2151 | 0.86 | 1350 | 0.1252 | 0.9333 | | 0.187 | 0.9 | 1400 | 0.1132 | 0.9467 | | 0.2218 | 0.93 | 1450 | 0.1030 | 0.9533 | | 0.2371 | 0.96 | 1500 | 0.1036 | 0.9467 | | 0.2264 | 0.99 | 1550 | 0.1041 | 0.9467 | | 0.2159 | 1.02 | 1600 | 0.1338 | 0.9267 | | 0.1773 | 1.06 | 1650 | 0.1218 | 0.94 | | 0.1381 | 1.09 | 1700 | 0.1593 | 0.94 | | 0.1582 | 1.12 | 1750 | 0.1445 | 0.9533 | | 0.1921 | 1.15 | 1800 | 0.1355 | 0.94 | | 0.206 | 1.18 | 1850 | 0.1511 | 0.9467 | | 0.1679 | 1.22 | 1900 | 0.1394 | 0.94 | | 0.1691 | 1.25 | 1950 | 0.1403 | 0.9333 | | 0.2301 | 1.28 | 2000 | 0.1169 | 0.9467 | | 0.1764 | 1.31 | 2050 | 0.1507 | 0.9333 | | 0.1772 | 1.34 | 2100 | 0.1148 | 0.96 | | 0.1749 | 1.38 | 2150 | 0.1203 | 0.94 | | 0.1912 | 1.41 | 2200 | 0.1037 | 0.94 | | 0.1614 | 1.44 | 2250 | 0.1006 | 0.9533 | | 0.1975 | 1.47 | 2300 | 0.0985 | 0.9533 | | 0.1843 | 1.5 | 2350 | 0.0922 | 0.9533 | | 0.1764 | 1.54 | 2400 | 0.1259 | 0.9467 | | 0.1855 | 1.57 | 2450 | 0.1243 | 0.96 | | 0.1272 | 1.6 | 2500 | 0.2107 | 0.9267 | | 0.241 | 1.63 | 2550 | 0.1142 | 0.9533 | | 0.1584 | 1.66 | 2600 | 0.1194 | 0.9467 | | 0.1568 | 1.7 | 2650 | 
0.1196 | 0.9533 | | 0.1896 | 1.73 | 2700 | 0.1311 | 0.9533 | | 0.143 | 1.76 | 2750 | 0.1140 | 0.9533 | | 0.227 | 1.79 | 2800 | 0.1482 | 0.9333 | | 0.1404 | 1.82 | 2850 | 0.1366 | 0.94 | | 0.1865 | 1.86 | 2900 | 0.1174 | 0.94 | | 0.1659 | 1.89 | 2950 | 0.1189 | 0.94 | | 0.1882 | 1.92 | 3000 | 0.1144 | 0.9467 | | 0.1403 | 1.95 | 3050 | 0.1358 | 0.94 | | 0.2193 | 1.98 | 3100 | 0.1092 | 0.9533 | | 0.1392 | 2.02 | 3150 | 0.1278 | 0.9267 | | 0.1292 | 2.05 | 3200 | 0.1186 | 0.96 | | 0.0939 | 2.08 | 3250 | 0.1183 | 0.94 | | 0.1356 | 2.11 | 3300 | 0.1939 | 0.94 | | 0.1175 | 2.14 | 3350 | 0.1499 | 0.94 | | 0.1285 | 2.18 | 3400 | 0.1538 | 0.94 | | 0.1018 | 2.21 | 3450 | 0.1796 | 0.9333 | | 0.1342 | 2.24 | 3500 | 0.1540 | 0.94 | | 0.17 | 2.27 | 3550 | 0.1261 | 0.94 | | 0.1548 | 2.3 | 3600 | 0.1375 | 0.9267 | | 0.1415 | 2.34 | 3650 | 0.1264 | 0.9333 | | 0.1096 | 2.37 | 3700 | 0.1252 | 0.9333 | | 0.1001 | 2.4 | 3750 | 0.1546 | 0.94 | | 0.0934 | 2.43 | 3800 | 0.1534 | 0.94 | | 0.1287 | 2.46 | 3850 | 0.1735 | 0.9333 | | 0.0872 | 2.5 | 3900 | 0.1475 | 0.9467 | | 0.0994 | 2.53 | 3950 | 0.1735 | 0.9467 | | 0.1558 | 2.56 | 4000 | 0.1585 | 0.9467 | | 0.1517 | 2.59 | 4050 | 0.2021 | 0.9333 | | 0.1246 | 2.62 | 4100 | 0.1594 | 0.9267 | | 0.1228 | 2.66 | 4150 | 0.1338 | 0.9533 | | 0.1064 | 2.69 | 4200 | 0.1421 | 0.9467 | | 0.1466 | 2.72 | 4250 | 0.1383 | 0.9467 | | 0.1243 | 2.75 | 4300 | 0.1604 | 0.9533 | | 0.1434 | 2.78 | 4350 | 0.1736 | 0.9333 | | 0.1127 | 2.82 | 4400 | 0.1909 | 0.9267 | | 0.0908 | 2.85 | 4450 | 0.1958 | 0.9333 | | 0.1134 | 2.88 | 4500 | 0.1596 | 0.94 | | 0.1345 | 2.91 | 4550 | 0.1604 | 0.9533 | | 0.1913 | 2.94 | 4600 | 0.1852 | 0.9267 | | 0.1382 | 2.98 | 4650 | 0.1852 | 0.9333 | | 0.1109 | 3.01 | 4700 | 0.1905 | 0.9333 | | 0.1144 | 3.04 | 4750 | 0.1655 | 0.94 | | 0.074 | 3.07 | 4800 | 0.2034 | 0.9333 | | 0.0926 | 3.1 | 4850 | 0.1929 | 0.94 | | 0.0911 | 3.13 | 4900 | 0.1703 | 0.9333 | | 0.0933 | 3.17 | 4950 | 0.1826 | 0.9333 | | 0.1003 | 3.2 | 5000 | 0.1716 | 0.94 | | 0.0889 | 3.23 | 5050 | 0.1843 | 0.9267 | | 0.0841 | 3.26 | 5100 | 0.1670 | 0.94 | | 0.0918 | 3.29 | 5150 | 0.1595 | 0.9467 | | 0.0795 | 3.33 | 5200 | 0.1504 | 0.96 | | 0.0978 | 3.36 | 5250 | 0.1317 | 0.96 | | 0.1202 | 3.39 | 5300 | 0.1641 | 0.9533 | | 0.0935 | 3.42 | 5350 | 0.1473 | 0.96 | | 0.0673 | 3.45 | 5400 | 0.1684 | 0.9533 | | 0.0729 | 3.49 | 5450 | 0.1414 | 0.9533 | | 0.077 | 3.52 | 5500 | 0.1669 | 0.9533 | | 0.1264 | 3.55 | 5550 | 0.1364 | 0.96 | | 0.1282 | 3.58 | 5600 | 0.1575 | 0.9467 | | 0.0553 | 3.61 | 5650 | 0.1440 | 0.9467 | | 0.0953 | 3.65 | 5700 | 0.1526 | 0.9533 | | 0.0886 | 3.68 | 5750 | 0.1633 | 0.94 | | 0.0901 | 3.71 | 5800 | 0.1704 | 0.9467 | | 0.0986 | 3.74 | 5850 | 0.1674 | 0.94 | | 0.0849 | 3.77 | 5900 | 0.1989 | 0.9333 | | 0.0815 | 3.81 | 5950 | 0.1942 | 0.94 | | 0.0973 | 3.84 | 6000 | 0.1611 | 0.94 | | 0.0599 | 3.87 | 6050 | 0.1807 | 0.9267 | | 0.1068 | 3.9 | 6100 | 0.1966 | 0.94 | | 0.0889 | 3.93 | 6150 | 0.1979 | 0.9333 | | 0.0854 | 3.97 | 6200 | 0.2012 | 0.9333 | | 0.1207 | 4.0 | 6250 | 0.1983 | 0.9333 | | 0.0735 | 4.03 | 6300 | 0.1795 | 0.94 | | 0.1148 | 4.06 | 6350 | 0.1966 | 0.94 | | 0.0725 | 4.09 | 6400 | 0.2290 | 0.94 | | 0.0576 | 4.13 | 6450 | 0.1936 | 0.9333 | | 0.0477 | 4.16 | 6500 | 0.2090 | 0.9333 | | 0.0722 | 4.19 | 6550 | 0.1878 | 0.9333 | | 0.0936 | 4.22 | 6600 | 0.2087 | 0.94 | | 0.0715 | 4.25 | 6650 | 0.2040 | 0.94 | | 0.0586 | 4.29 | 6700 | 0.1862 | 0.9333 | | 0.0548 | 4.32 | 6750 | 0.1801 | 0.9267 | | 0.0527 | 4.35 | 6800 | 0.1912 | 0.9333 | | 0.0813 | 4.38 | 6850 | 0.1941 | 0.9333 | | 
0.0531 | 4.41 | 6900 | 0.1932 | 0.9267 | | 0.0606 | 4.45 | 6950 | 0.2195 | 0.94 | | 0.1213 | 4.48 | 7000 | 0.1975 | 0.9333 | | 0.0807 | 4.51 | 7050 | 0.1915 | 0.9333 | | 0.076 | 4.54 | 7100 | 0.1987 | 0.9333 | | 0.0595 | 4.57 | 7150 | 0.2052 | 0.9333 | | 0.0832 | 4.61 | 7200 | 0.2039 | 0.9333 | | 0.0657 | 4.64 | 7250 | 0.2186 | 0.94 | | 0.0684 | 4.67 | 7300 | 0.2063 | 0.94 | | 0.0429 | 4.7 | 7350 | 0.2056 | 0.94 | | 0.0531 | 4.73 | 7400 | 0.2139 | 0.94 | | 0.0556 | 4.77 | 7450 | 0.2153 | 0.94 | | 0.0824 | 4.8 | 7500 | 0.2010 | 0.9333 | | 0.039 | 4.83 | 7550 | 0.2079 | 0.94 | | 0.068 | 4.86 | 7600 | 0.2140 | 0.94 | | 0.065 | 4.89 | 7650 | 0.2108 | 0.94 | | 0.0359 | 4.93 | 7700 | 0.2058 | 0.94 | | 0.0592 | 4.96 | 7750 | 0.2029 | 0.94 | | 0.0793 | 4.99 | 7800 | 0.2023 | 0.94 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
yunan/ddpm-butterflies-128
yunan
2022-11-06T20:20:17Z
1
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-06T19:06:04Z
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-butterflies-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset.

## Intended uses & limitations

#### How to use

```python
# A minimal sketch filling in the card's TODO: standard DDPMPipeline usage,
# matching this repository's DDPMPipeline tag; the output filename is illustrative.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("yunan/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/yunan/ddpm-butterflies-128/tensorboard?#scalars)
andrewkroening/GalaxyFarAway-DialoGPT-LeiaOrgana
andrewkroening
2022-11-06T20:13:52Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "en", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-06T20:12:57Z
--- language: en tags: - conversational license: cc --- # GPT-2 This model is based on GPT-2 and was fine-tuned on a Hugging Face dataset. It is intended largely as an illustrative example and is not intended to be used for any serious purpose. It's trained on a movie script, for goodness' sake. Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Acknowledgements There are several sources of inspiration and insight for the project that spawned this model. I'd like to recognize them up front: * The [Microsoft DialoGPT-Medium](https://huggingface.co/microsoft/DialoGPT-medium?text=Hi.) model page was very insightful for getting started. * Lynn Zheng [r3dhummingbird](https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua?text=Hey+my+name+is+Thomas%21+How+are+you%3F) put together one heck of an awesome tutorial on how to fine-tune GPT-2 for conversational purposes. I used her tutorial as a starting point for this project. Check out the [GitHub repo here](https://github.com/RuolinZheng08/twewy-discord-chatbot). * [This article](https://towardsdatascience.com/make-your-own-rick-sanchez-bot-with-transformers-and-dialogpt-fine-tuning-f85e6d1f4e30), written by Rostyslav Neskorozhenyi, was also very insightful. * From a lineage standpoint, it looks like Nathan Cooper kicked this whole thing off with this [notebook](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb). * Noah Gift figured out a few of the big pieces in [this repository](https://github.com/nogibjj/hugging-face-tutorial-practice). * I'd be remiss if I didn't also mention Hugging Face's own support [documentation](https://huggingface.co/transformers/v2.0.0/examples.html#gpt-2-gpt-and-causal-language-modeling) and team. All around great. ## Model description This model uses GPT-2 Medium as a base model and was fine-tuned using scripts from the original (and best) Star Wars Trilogy. In this particular case, it was fine-tuned on Leia Organa's 220-some lines. This is not a lot, and thus the model should not be assumed to have serious integrity. It's just a fun little project. ## Intended uses & limitations This model is intended to be used for fun and entertainment. Don't take it too seriously. ### Ways to use You can always chat with the model directly on the Hugging Face website. Just click the "Chat" button on the right side of the model page. If you want to use the model in your own project, I recommend you train it better using much more data. To access the GitHub repository I used to train this model, click [here](https://github.com/nogibjj/hugging-face-gpt-trainer/tree/gpt-fine-tune). ## Fine-tuning data The script to generate this model takes a Hugging Face dataset in this approximate format: | Speaker | Text | | --- | --- | | Luke | Hello there. | | Han | General Kenobi. | | Luke | You are a bold one. | The script then asks the user to define parameters for making the dataset and proceeding to fine-tuning. The actual dataset for this model can be found [here](andrewkroening/Star-wars-scripts-dialogue-IV-VI).
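For local use of the checkpoint above, a minimal generation sketch, assuming the standard DialoGPT-style chat pattern (the prompt text is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("andrewkroening/GalaxyFarAway-DialoGPT-LeiaOrgana")
model = AutoModelForCausalLM.from_pretrained("andrewkroening/GalaxyFarAway-DialoGPT-LeiaOrgana")

# DialoGPT-style models expect each turn to end with the EOS token
input_ids = tokenizer.encode("Hello there!" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly generated tokens
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```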
andrewkroening/GalaxyFarAway-DialoGPT-Threepio
andrewkroening
2022-11-06T20:02:42Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "en", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-06T19:56:30Z
--- language: en tags: - conversational license: cc --- # GPT-2 This model is based on GPT-2 and was fine-tuned on a Hugging Face dataset. It is intended largely as an illustrative example and is not intended to be used for any serious purpose. It's trained on a movie script, for goodness' sake. Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Acknowledgements There are several sources of inspiration and insight for the project that spawned this model. I'd like to recognize them up front: * The [Microsoft DialoGPT-Medium](https://huggingface.co/microsoft/DialoGPT-medium?text=Hi.) model page was very insightful for getting started. * Lynn Zheng [r3dhummingbird](https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua?text=Hey+my+name+is+Thomas%21+How+are+you%3F) put together one heck of an awesome tutorial on how to fine-tune GPT-2 for conversational purposes. I used her tutorial as a starting point for this project. Check out the [GitHub repo here](https://github.com/RuolinZheng08/twewy-discord-chatbot). * [This article](https://towardsdatascience.com/make-your-own-rick-sanchez-bot-with-transformers-and-dialogpt-fine-tuning-f85e6d1f4e30), written by Rostyslav Neskorozhenyi, was also very insightful. * From a lineage standpoint, it looks like Nathan Cooper kicked this whole thing off with this [notebook](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb). * Noah Gift figured out a few of the big pieces in [this repository](https://github.com/nogibjj/hugging-face-tutorial-practice). * I'd be remiss if I didn't also mention Hugging Face's own support [documentation](https://huggingface.co/transformers/v2.0.0/examples.html#gpt-2-gpt-and-causal-language-modeling) and team. All around great. ## Model description This model uses GPT-2 Medium as a base model and was fine-tuned using scripts from the original (and best) Star Wars Trilogy. In this particular case, it was fine-tuned on C3PO's 300-some lines. This is not a lot, and thus the model should not be assumed to have serious integrity. It's just a fun little project. ## Intended uses & limitations This model is intended to be used for fun and entertainment. Don't take it too seriously. ### Ways to use You can always chat with the model directly on the Hugging Face website. Just click the "Chat" button on the right side of the model page. If you want to use the model in your own project, I recommend you train it better using much more data. To access the GitHub repository I used to train this model, click [here](https://github.com/nogibjj/hugging-face-gpt-trainer/tree/gpt-fine-tune). ## Fine-tuning data The script to generate this model takes a Hugging Face dataset in this approximate format: | Speaker | Text | | --- | --- | | Luke | Hello there. | | Han | General Kenobi. | | Luke | You are a bold one. | The script then asks the user to define parameters for making the dataset and proceeding to fine-tuning. The actual dataset for this model can be found [here](andrewkroening/Star-wars-scripts-dialogue-IV-VI).
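As with the sibling models, a minimal local-generation sketch for this checkpoint (standard DialoGPT chat pattern; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("andrewkroening/GalaxyFarAway-DialoGPT-Threepio")
model = AutoModelForCausalLM.from_pretrained("andrewkroening/GalaxyFarAway-DialoGPT-Threepio")

# Encode one user turn, terminated with the EOS token
input_ids = tokenizer.encode("Hello there!" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly generated tokens
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```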
ZenzoHaigoshima/ZenzoHaigoshima
ZenzoHaigoshima
2022-11-06T20:01:28Z
0
0
null
[ "region:us" ]
null
2022-11-06T19:56:12Z
![Snapshot_011.png](https://s3.amazonaws.com/moonup/production/uploads/1667764879029-6368114f835ff6ed8be1b7a6.png)
andrewkroening/GalaxyFarAway-DialoGPT-LukeSkywalker
andrewkroening
2022-11-06T19:50:52Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "en", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-06T19:48:55Z
--- language: en tags: - conversational license: cc --- # GPT-2 This model is based on GPT-2 and was fine-tuned on a Hugging Face dataset. It is intended largely as an illustrative example and is not intended to be used for any serious purpose. It's trained on a movie script, for goodness' sake. Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Acknowledgements There are several sources of inspiration and insight for the project that spawned this model. I'd like to recognize them up front: * The [Microsoft DialoGPT-Medium](https://huggingface.co/microsoft/DialoGPT-medium?text=Hi.) model page was very insightful for getting started. * Lynn Zheng [r3dhummingbird](https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua?text=Hey+my+name+is+Thomas%21+How+are+you%3F) put together one heck of an awesome tutorial on how to fine-tune GPT-2 for conversational purposes. I used her tutorial as a starting point for this project. Check out the [GitHub repo here](https://github.com/RuolinZheng08/twewy-discord-chatbot). * [This article](https://towardsdatascience.com/make-your-own-rick-sanchez-bot-with-transformers-and-dialogpt-fine-tuning-f85e6d1f4e30), written by Rostyslav Neskorozhenyi, was also very insightful. * From a lineage standpoint, it looks like Nathan Cooper kicked this whole thing off with this [notebook](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb). * Noah Gift figured out a few of the big pieces in [this repository](https://github.com/nogibjj/hugging-face-tutorial-practice). * I'd be remiss if I didn't also mention Hugging Face's own support [documentation](https://huggingface.co/transformers/v2.0.0/examples.html#gpt-2-gpt-and-causal-language-modeling) and team. All around great. ## Model description This model uses GPT-2 Medium as a base model and was fine-tuned using scripts from the original (and best) Star Wars Trilogy. In this particular case, it was fine-tuned on Luke Skywalker's 490-some lines. This is not a lot, and thus the model should not be assumed to have serious integrity. It's just a fun little project. ## Intended uses & limitations This model is intended to be used for fun and entertainment. Don't take it too seriously. ### Ways to use You can always chat with the model directly on the Hugging Face website. Just click the "Chat" button on the right side of the model page. If you want to use the model in your own project, I recommend you train it better using much more data. To access the GitHub repository I used to train this model, click [here](https://github.com/nogibjj/hugging-face-gpt-trainer/tree/gpt-fine-tune). ## Fine-tuning data The script to generate this model takes a Hugging Face dataset in this approximate format: | Speaker | Text | | --- | --- | | Luke | Hello there. | | Han | General Kenobi. | | Luke | You are a bold one. | The script then asks the user to define parameters for making the dataset and proceeding to fine-tuning. The actual dataset for this model can be found [here](andrewkroening/Star-wars-scripts-dialogue-IV-VI).
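Again, a minimal local-generation sketch for this checkpoint (standard DialoGPT chat pattern; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("andrewkroening/GalaxyFarAway-DialoGPT-LukeSkywalker")
model = AutoModelForCausalLM.from_pretrained("andrewkroening/GalaxyFarAway-DialoGPT-LukeSkywalker")

# Encode one user turn, terminated with the EOS token
input_ids = tokenizer.encode("Hello there!" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly generated tokens
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```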
halflings/house_price_prediction_ser
halflings
2022-11-06T19:40:06Z
0
2
mlconsole
[ "mlconsole", "tabular-regression", "dataset:house_price_prediction", "license:unknown", "model-index", "region:us" ]
tabular-regression
2022-11-06T19:40:02Z
--- license: unknown inference: false tags: - mlconsole - tabular-regression library_name: mlconsole metrics: - mae - loss datasets: - house_price_prediction model-index: - name: house_price_prediction_ser results: - task: type: tabular-regression name: tabular-regression dataset: type: house_price_prediction name: house_price_prediction metrics: - type: mae name: Mean absolute error value: 5.011783599853516 - type: loss name: Model loss value: 43.01755905151367 --- # regression model trained on "house_price_prediction" 🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/house_price_prediction_ser) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console.
pig4431/IMDB_XLNET_5E
pig4431
2022-11-06T19:29:12Z
5
0
transformers
[ "transformers", "pytorch", "xlnet", "text-classification", "generated_from_trainer", "dataset:imdb", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T19:24:11Z
--- license: mit tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: IMDB_XLNET_5E results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.94 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IMDB_XLNET_5E This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3195 - Accuracy: 0.94 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3192 | 0.63 | 50 | 0.2033 | 0.94 | | 0.196 | 1.27 | 100 | 0.2036 | 0.9467 | | 0.1651 | 1.9 | 150 | 0.2106 | 0.9267 | | 0.0628 | 2.53 | 200 | 0.3531 | 0.92 | | 0.0865 | 3.16 | 250 | 0.2186 | 0.9533 | | 0.0436 | 3.8 | 300 | 0.2718 | 0.9533 | | 0.0254 | 4.43 | 350 | 0.3195 | 0.94 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.1
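A minimal inference sketch for the checkpoint above, assuming the standard `transformers` pipeline API (the example review is illustrative):

```python
from transformers import pipeline

# Binary sentiment classifier fine-tuned on IMDB; label names follow the model config
clf = pipeline("text-classification", model="pig4431/IMDB_XLNET_5E")
print(clf("A surprisingly tender and well-acted film."))
```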
syuCream/A
syuCream
2022-11-06T18:59:55Z
0
0
null
[ "region:us" ]
null
2022-11-06T18:58:27Z
```python import torch from diffusers import StableDiffusionPipeline from torch import autocast MODEL_ID = "CompVis/stable-diffusion-v1-4" DEVICE = "cuda" YOUR_TOKEN = "your copied Hugging Face access token" pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, revision="fp16", torch_dtype=torch.float16, use_auth_token=YOUR_TOKEN) pipe.to(DEVICE) prompt = "a dog painted by Katsushika Hokusai" with autocast(DEVICE): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("test.png") ```
sd-concepts-library/terraria-style
sd-concepts-library
2022-11-06T18:59:29Z
0
12
null
[ "license:mit", "region:us" ]
null
2022-11-06T18:59:25Z
--- license: mit --- ### terraria style on Stable Diffusion This is the `<terr-sty>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<terr-sty> 0](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/6.jpeg) ![<terr-sty> 1](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/2.jpeg) ![<terr-sty> 2](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/0.jpeg) ![<terr-sty> 3](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/8.jpeg) ![<terr-sty> 4](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/3.jpeg) ![<terr-sty> 5](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/5.jpeg) ![<terr-sty> 6](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/4.jpeg) ![<terr-sty> 7](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/9.jpeg) ![<terr-sty> 8](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/1.jpeg) ![<terr-sty> 9](https://huggingface.co/sd-concepts-library/terraria-style/resolve/main/concept_images/7.jpeg)
cyburn/mollie_monger
cyburn
2022-11-06T18:27:48Z
0
0
null
[ "region:us" ]
null
2022-11-06T15:29:38Z
# mollie monger style This model will produce images styled after the great artist Hollie Mengert. Out of respect for the artist, I have changed the name to Mollie Monger. To invoke the style, add ", by mollie monger" at the end of your prompt. Samples are included in the repo.
ianlaauu/fine-tuning-NLP
ianlaauu
2022-11-06T18:15:42Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "license:gpl-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-04T20:19:05Z
--- license: gpl-2.0 --- - Pre-trained model: RoBERTa - tags: GLUE - datasets: MRPC
sd-concepts-library/coraline
sd-concepts-library
2022-11-06T17:06:08Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-11-06T09:24:14Z
--- license: mit --- ### coraline on Stable Diffusion This is the `coraline` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![coraline 0](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/0.png) ![coraline 1](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/1.png) ![coraline 2](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/2.png) ![coraline 3](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/3.png) ![coraline 4](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/4.png) ![coraline 5](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/5.png) ![coraline 6](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/6.png) ![coraline 7](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/7.png) ![coraline 8](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/8.png) ![coraline 9](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/9.png) ![coraline 10](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/10.png) ![coraline 11](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/11.png) ![coraline 12](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/12.png) ![coraline 13](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/13.png) ![coraline 14](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/14.png) ![coraline 15](https://huggingface.co/sd-concepts-library/coraline/resolve/main/concept_images/15.png) This is the sample created at the end of training: "a graffiti on the wall with coraline" ![coraline 16](https://huggingface.co/sd-concepts-library/coraline/resolve/main/coraline-sample.png)
dchaplinsky/uk_ner_web_trf_large
dchaplinsky
2022-11-06T16:35:38Z
5
6
spacy
[ "spacy", "token-classification", "uk", "dataset:ner-uk", "license:mit", "model-index", "region:us" ]
token-classification
2022-10-31T18:26:48Z
--- tags: - spacy - token-classification language: uk datasets: - ner-uk license: mit model-index: - name: uk_ner_web_trf_large results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9183514774 - name: NER Recall type: recall value: 0.915503876 - name: NER F Score type: f_score value: 0.9169254658 widget: - text: "Президент Володимир Зеленський пояснив, що наразі діалог із режимом Володимира путіна неможливий, адже агресор обрав курс на знищення українського народу. За словами Зеленського цей режим РФ виявляє неповагу до суверенітету і територіальної цілісності України." --- # uk_ner_web_trf_large ## Model description **uk_ner_web_trf_large** is a fine-tuned [XLM-Roberta model](https://huggingface.co/xlm-roberta-large) that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art (SoA)** performance on the NER task for the Ukrainian language. It outperforms another SpaCy model, [uk_core_news_trf](https://huggingface.co/ukr-models/uk_core_news_trf), on the NER task. It has been trained to recognize four types of entities: locations (LOC), organizations (ORG), persons (PERS) and miscellaneous (MISC). The model was fine-tuned on the [NER-UK dataset](https://github.com/lang-uk/ner-uk), released by the [lang-uk](https://lang.org.ua) project. A smaller transformer-based SpaCy model is available [here](https://huggingface.co/dchaplinsky/uk_ner_web_trf_base). Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2022
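A minimal usage sketch for the pipeline above, assuming the packaged spaCy pipeline from this repo has already been installed (the example sentence is shortened from the widget text):

```python
import spacy

# Load the installed pipeline package by name
nlp = spacy.load("uk_ner_web_trf_large")

doc = nlp("Президент Володимир Зеленський пояснив, що наразі діалог із режимом Володимира путіна неможливий.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```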
dchaplinsky/uk_ner_web_trf_base
dchaplinsky
2022-11-06T16:34:16Z
8
3
spacy
[ "spacy", "token-classification", "uk", "dataset:ner-uk", "license:mit", "model-index", "region:us" ]
token-classification
2022-10-24T22:39:42Z
--- tags: - spacy - token-classification language: uk datasets: - ner-uk license: mit model-index: - name: uk_ner_web_trf_base results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8987742191 - name: NER Recall type: recall value: 0.8810077519 - name: NER F Score type: f_score value: 0.8898023096 widget: - text: "Президент Володимир Зеленський пояснив, що наразі діалог із режимом Володимира путіна неможливий, адже агресор обрав курс на знищення українського народу. За словами Зеленського цей режим РФ виявляє неповагу до суверенітету і територіальної цілісності України." --- # uk_ner_web_trf_base ## Model description **uk_ner_web_trf_base** is a fine-tuned [XLM-Roberta model](https://huggingface.co/xlm-roberta-base) that is ready to use for **Named Entity Recognition** and achieves performance close to the **state of the art** on the NER task for the Ukrainian language. It has been trained to recognize four types of entities: locations (LOC), organizations (ORG), persons (PERS) and miscellaneous (MISC). The model was fine-tuned on the [NER-UK dataset](https://github.com/lang-uk/ner-uk), released by the [lang-uk](https://lang.org.ua) project. A bigger model, trained on xlm-roberta-large with **state-of-the-art** performance, is available [here](https://huggingface.co/dchaplinsky/uk_ner_web_trf_large). Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2022
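The same usage sketch applies to the base pipeline, again assuming the package from this repo is installed:

```python
import spacy

# Load the installed pipeline package by name
nlp = spacy.load("uk_ner_web_trf_base")

doc = nlp("Президент Володимир Зеленський пояснив, що наразі діалог із режимом Володимира путіна неможливий.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```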
damikkuch/dmkch
damikkuch
2022-11-06T15:48:22Z
0
0
null
[ "license:openrail", "region:us" ]
null
2022-11-06T15:46:36Z
--- license: openrail --- ```bash git lfs install git clone https://huggingface.co/damikkuch/dmkch ```
Yaxin/bert-base-multilingual-cased-42-QAData
Yaxin
2022-11-06T15:37:24Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-06T15:36:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-multilingual-cased-42-QAData results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-42-QAData This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0873 - Precision: 0.4420 - Recall: 0.2887 - F1: 0.3493 - Accuracy: 0.9755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1064 | 1.0 | 3118 | 0.0873 | 0.4420 | 0.2887 | 0.3493 | 0.9755 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
pere/t5-sami-oversetter
pere
2022-11-06T14:22:18Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-10-19T07:08:44Z
--- license: apache-2.0 --- # T5 Sami - Norwegian - Sami Placeholder for a future model. A description is coming soon.
fgaim/tibert-base
fgaim
2022-11-06T14:12:22Z
13
1
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "ti", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ti widget: - text: "ዓቕሚ ደቂኣንስትዮ [MASK] ብግብሪ ተራእዩ" --- # BERT Base for Tigrinya Language We pre-train a BERT base-uncased model for Tigrinya on a dataset of 40 million tokens, trained for 40 epochs. This repo contains the original pre-trained Flax model that was trained on a TPU v3-8 and its corresponding PyTorch version. ## Hyperparameters The hyperparameters corresponding to the model size mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | Seq | |------------|----|----|-----|------|------|------| | BASE | 12 | 12 | 768 | 3072 | 110M | 512 | (L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.) ## Citation If you use this model in your product or research, please cite as follows: ``` @article{Fitsum2021TiPLMs, author={Fitsum Gaim and Wonsuk Yang and Jong C. Park}, title={Monolingual Pre-trained Language Models for Tigrinya}, year=2021, publisher={WiNLP 2021 at EMNLP 2021} } ```
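A minimal masked-prediction sketch for the model above, assuming the standard `transformers` fill-mask pipeline (the input is the widget example from this card):

```python
from transformers import pipeline

# Predict the masked token in the Tigrinya sentence
unmasker = pipeline("fill-mask", model="fgaim/tibert-base")
print(unmasker("ዓቕሚ ደቂኣንስትዮ [MASK] ብግብሪ ተራእዩ"))
```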
radioactive11/flower-classification
radioactive11
2022-11-06T14:09:30Z
0
0
null
[ "doi:10.57967/hf/0096", "region:us" ]
null
2022-11-06T14:06:45Z
# flower-classifier This project is an application of machine learning with Python. - It focuses on flower classification using deep learning concepts and machine learning algorithms. - The goal was to apply deep learning techniques to train a flower classifier to recognize different species of flowers. Flower recognition uses the edge and colour characteristics of flower images to classify them. The project is broken down into multiple steps: 1. Load and preprocess the image dataset 2. Train the image classifier on your dataset 3. Use the trained classifier to predict image content > PS: Please do not forget to give this repo a star.
tatakof/q-FrozenLake-v1-4x4-noSlippery
tatakof
2022-11-06T12:36:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-11-06T12:36:42Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # `load_from_hub` and `evaluate_agent` are helper functions defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="franfram/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
halflings/diabetes_detection_v2
halflings
2022-11-06T11:21:56Z
0
0
mlconsole
[ "mlconsole", "tabular-classification", "dataset:diabetes_detection", "license:unknown", "model-index", "region:us" ]
tabular-classification
2022-11-06T11:21:52Z
--- license: unknown inference: false tags: - mlconsole - tabular-classification library_name: mlconsole metrics: - accuracy - loss datasets: - diabetes_detection model-index: - name: diabetes_detection_v2 results: - task: type: tabular-classification name: tabular-classification dataset: type: diabetes_detection name: diabetes_detection metrics: - type: accuracy name: Accuracy value: 0.7395833730697632 - type: loss name: Model loss value: 0.5416829586029053 --- # classification model trained on "diabetes_detection" 🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/diabetes_detection_v2) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console.
irfanns/autotrain-english-to-interlingua-translator-2002766502
irfanns
2022-11-06T10:56:33Z
100
0
transformers
[ "transformers", "pytorch", "autotrain", "translation", "en", "it", "dataset:irfanns/autotrain-data-english-to-interlingua-translator", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
translation
2022-11-06T10:44:14Z
--- tags: - autotrain - translation language: - en - it datasets: - irfanns/autotrain-data-english-to-interlingua-translator co2_eq_emissions: emissions: 19.067960229529483 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 2002766502 - CO2 Emissions (in grams): 19.0680 ## Validation Metrics - Loss: 1.241 - SacreBLEU: 42.137 - Gen len: 32.318
vanme/vmehlin_distilbert-finetuned-squad
vanme
2022-11-06T10:37:11Z
19
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-10-24T13:12:36Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: vmehlin_distilbert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vmehlin_distilbert-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1 ### co2_eq_emissions: - emissions: 49.49 g - source: eco2AI - training_time: 00:31:54 - geographical_location: Bavaria, Germany - hardware_used: Intel(R) Xeon(R) Gold 5215 CPUs (2 devices) & NVIDIA A40 (1 device)
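A minimal extractive-QA sketch for the checkpoint above, assuming the standard `transformers` question-answering pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="vanme/vmehlin_distilbert-finetuned-squad")
result = qa(question="What dataset was the model fine-tuned on?",
            context="The model was fine-tuned on the SQuAD dataset.")
print(result["answer"], result["score"])
```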
arielazzi/whisper-small-pt
arielazzi
2022-11-06T07:59:53Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-05T20:54:53Z
--- language: - pt license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small PT - Ariel Azzi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 args: 'config: pt, split: test' metrics: - name: Wer type: wer value: 14.344671278521048 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small PT - Ariel Azzi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2065 - Wer: 14.3447 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.198 | 0.59 | 1000 | 0.2338 | 16.2424 | | 0.0933 | 1.19 | 2000 | 0.2138 | 14.9756 | | 0.082 | 1.78 | 3000 | 0.2024 | 14.2111 | | 0.0452 | 2.38 | 4000 | 0.2065 | 14.3447 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
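A minimal transcription sketch for the checkpoint above, assuming the standard `transformers` ASR pipeline (the audio filename is illustrative; decoding a local file requires ffmpeg):

```python
from transformers import pipeline

# Portuguese speech recognition with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="arielazzi/whisper-small-pt")
print(asr("sample.wav")["text"])
```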
okho0653/distilbert-base-uncased-finetuned-sst-2-english-finetuned-cad-20pc
okho0653
2022-11-06T06:51:03Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T06:40:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sst-2-english-finetuned-cad-20pc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst-2-english-finetuned-cad-20pc This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | No log | 1.0 | 7 | 0.0032 | 1.0 | 1.0 | | No log | 2.0 | 14 | 0.0002 | 1.0 | 1.0 | | No log | 3.0 | 21 | 0.0001 | 1.0 | 1.0 | | No log | 4.0 | 28 | 0.0001 | 1.0 | 1.0 | | No log | 5.0 | 35 | 0.0001 | 1.0 | 1.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
okho0653/distilbert-base-uncased-finetuned-sst-2-english-finetuned-20pc
okho0653
2022-11-06T06:40:16Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T06:27:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sst-2-english-finetuned-20pc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst-2-english-finetuned-20pc This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5078 - Accuracy: 0.8333 - F1: 0.3721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 41 | 0.3986 | 0.8272 | 0.0667 | | No log | 2.0 | 82 | 0.3829 | 0.8519 | 0.4 | | No log | 3.0 | 123 | 0.4916 | 0.8333 | 0.2286 | | No log | 4.0 | 164 | 0.4894 | 0.8333 | 0.4490 | | No log | 5.0 | 205 | 0.5078 | 0.8333 | 0.3721 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
munjulurik/autoShots
munjulurik
2022-11-06T06:34:39Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "news", "summarizer", "inshorts", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-02T04:24:04Z
--- language: - en tags: - t5 - news - summarizer - inshorts --- ## Model description AutoShots is a news summariser model, built by mimicking the Inshorts application, which manually summarises news into ~60 words. It is a T5-Small model that has been fine-tuned with data scraped from the Inshorts website. Disclaimer: This model and the use of Inshorts data are solely for research and learning purposes, and the model is not intended to be used as a commercial application. Special thanks to the Inshorts website for allowing me to access their data :) ### How to use Here is how to use this model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html): ```python from transformers import pipeline summarizer = pipeline("summarization", model="munjulurik/autoShots") print(summarizer("""Washington [US], October 31 (ANI): US President Joe Biden lost his temper with Volodymyr Zelenskyy in June during a phone conversation when he asked for more military aid, NBC News reported on Monday, citing sources familiar with the call. The report said Biden routinely calls Zelenskyy when the US announces new aid packages for Ukraine. But the June call was different. Biden had barely finished informing Zelenskyy that he had approved another USD 1 billion in military assistance for Ukraine when his counterpart started asking for extra help Kyiv needs but isn’t getting, the report said. Biden raised his voice, and as per the NBC report said Zelenskyy could “show a little more gratitude.” Prior to the June 15 phone call, Biden’s dissatisfaction with Zelenskyy had been building for weeks, the sources said. According to them, the US president and a number of his aides believed that Washington was doing everything possible and as quickly as possible, but Zelenskyy continued to publicly pay attention only to what was not being done. After Zelenskyy was rebuffed during the June call, Zelenskyy publicly delivered a video message thanking Biden for the assistance and defusing the tensions. “I had an important conversation with US President Biden today,” NBC quoted Ukraine’s president in videotaped remarks. “I am grateful for this support. It is especially important for our defence in Donbas.” The United States has been a leading provider of security assistance to Ukraine, particularly since the start of the Russia-Ukraine conflict on February 24. This report on the Biden-Zelenskyy phone call comes two days after Washington announced USD 275 million in additional military assistance for Ukraine. “This drawdown will bring the total US military assistance for Ukraine to an unprecedented level of more than USD 18.5 billion since the beginning of the Administration,” the US State Department said in a statement. The United States, in 2022, provided more advanced defence equipment to Ukraine, as well as greater amounts of previously provided equipment, according to a Congressional Research Service report. According to Pentagon, US security assistance committed to Ukraine, includes High Mobility Artillery Rocket Systems, Stinger anti-aircraft systems, Javelin anti-armour systems and Mi-17 helicopters. Ukrainian officials have sought to acquire other advanced systems, including fighter aircraft, anti-ship, and additional air defence and anti-missile capabilities. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content.""")) >>> [{'summary_text': "US President Joe Biden lost his temper with US President Volodymyr Zelenskyy during a phone conversation in June when he asked for more military aid, NBC News reported on Monday. Biden had barely finished informing him that he had approved another USD 1 billion in military assistance for Ukraine when his counterpart started asking for extra help Kyiv needs but isn't getting, the report said."}] ```
okho0653/distilbert-base-uncased-finetuned-cad-20pc
okho0653
2022-11-06T06:26:56Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T06:17:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-cad-20pc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cad-20pc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0221 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | No log | 1.0 | 7 | 0.2262 | 1.0 | 1.0 | | No log | 2.0 | 14 | 0.0736 | 1.0 | 1.0 | | No log | 3.0 | 21 | 0.0358 | 1.0 | 1.0 | | No log | 4.0 | 28 | 0.0249 | 1.0 | 1.0 | | No log | 5.0 | 35 | 0.0221 | 1.0 | 1.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
okho0653/distilbert-base-uncased-finetuned-20pc
okho0653
2022-11-06T06:16:40Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T06:04:12Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-20pc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-20pc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3326 - Accuracy: 0.8642 - F1: 0.4762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 41 | 0.4428 | 0.8333 | 0.0 | | No log | 2.0 | 82 | 0.4012 | 0.8333 | 0.0 | | No log | 3.0 | 123 | 0.3619 | 0.8333 | 0.1818 | | No log | 4.0 | 164 | 0.3488 | 0.8580 | 0.3784 | | No log | 5.0 | 205 | 0.3326 | 0.8642 | 0.4762 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
huggingtweets/jdfromny206
huggingtweets
2022-11-06T05:44:41Z
108
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-06T05:31:26Z
--- language: en thumbnail: http://www.huggingtweets.com/jdfromny206/1667713430931/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1521632298273288193/svg4l6b7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">JDfromNY</div> <div style="text-align: center; font-size: 14px;">@jdfromny206</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from JDfromNY. | Data | JDfromNY | | --- | --- | | Tweets downloaded | 3228 | | Retweets | 107 | | Short tweets | 128 | | Tweets kept | 2993 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kxuv9gk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jdfromny206's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3e7l89e5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3e7l89e5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jdfromny206') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/alexabliss_wwe
huggingtweets
2022-11-06T05:06:07Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-06T04:18:55Z
--- language: en thumbnail: http://www.huggingtweets.com/alexabliss_wwe/1667711162135/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1271821102134833153/krgeswcX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lexi (Kaufman) Cabrera</div> <div style="text-align: center; font-size: 14px;">@alexabliss_wwe</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lexi (Kaufman) Cabrera. | Data | Lexi (Kaufman) Cabrera | | --- | --- | | Tweets downloaded | 3184 | | Retweets | 1160 | | Short tweets | 399 | | Tweets kept | 1625 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hgwztvb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alexabliss_wwe's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vlezdiv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vlezdiv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/alexabliss_wwe') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
okho0653/Bio_ClinicalBERT-finetuned-cad-20pc
okho0653
2022-11-06T02:59:42Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T02:45:20Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: Bio_ClinicalBERT-finetuned-cad-20pc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bio_ClinicalBERT-finetuned-cad-20pc This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0088 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | No log | 1.0 | 7 | 0.1109 | 1.0 | 1.0 | | No log | 2.0 | 14 | 0.0284 | 1.0 | 1.0 | | No log | 3.0 | 21 | 0.0142 | 1.0 | 1.0 | | No log | 4.0 | 28 | 0.0097 | 1.0 | 1.0 | | No log | 5.0 | 35 | 0.0088 | 1.0 | 1.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
uripper/GIANNIS
uripper
2022-11-06T02:34:15Z
5
0
diffusers
[ "diffusers", "unconditional-image-generation", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2022-11-01T10:20:02Z
--- tags: - unconditional-image-generation ---
okho0653/Bio_ClinicalBERT-finetuned-20pc
okho0653
2022-11-06T02:33:46Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-06T02:19:10Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: Bio_ClinicalBERT-finetuned-20pc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bio_ClinicalBERT-finetuned-20pc This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3213 - Accuracy: 0.8580 - F1: 0.4390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 41 | 1.0399 | 0.8642 | 0.45 | | No log | 2.0 | 82 | 1.1412 | 0.8519 | 0.4 | | No log | 3.0 | 123 | 1.2759 | 0.8642 | 0.45 | | No log | 4.0 | 164 | 1.2953 | 0.8519 | 0.5385 | | No log | 5.0 | 205 | 1.3213 | 0.8580 | 0.4390 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
Robertuus/Crypto_Sentiment_Analysis_Bert
Robertuus
2022-11-06T02:00:41Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:unknown", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-05T14:41:48Z
--- license: unknown --- # BERT model fine-tuned to analyze the sentiment of messages: LABEL_0 is positive and LABEL_1 is negative.
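A minimal inference sketch for this card, assuming the standard `transformers` text-classification pipeline (the input sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; per the card, LABEL_0 = positive, LABEL_1 = negative.
classifier = pipeline("text-classification", model="Robertuus/Crypto_Sentiment_Analysis_Bert")
print(classifier("Bitcoin is rallying hard today!"))
```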
sd-concepts-library/smurf-style
sd-concepts-library
2022-11-06T01:34:45Z
0
4
null
[ "license:mit", "region:us" ]
null
2022-11-06T01:34:41Z
--- license: mit --- ### Smurf Style on Stable Diffusion This is the `<smurfy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<smurfy> 0](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/6.jpeg) ![<smurfy> 1](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/2.jpeg) ![<smurfy> 2](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/0.jpeg) ![<smurfy> 3](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/8.jpeg) ![<smurfy> 4](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/3.jpeg) ![<smurfy> 5](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/5.jpeg) ![<smurfy> 6](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/4.jpeg) ![<smurfy> 7](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/9.jpeg) ![<smurfy> 8](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/1.jpeg) ![<smurfy> 9](https://huggingface.co/sd-concepts-library/smurf-style/resolve/main/concept_images/7.jpeg)
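A hedged loading sketch for concepts like this one, assuming a `diffusers` version with textual-inversion loading support and `runwayml/stable-diffusion-v1-5` as the base model (both are assumptions, not part of the card):

```python
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint, attach the <smurfy> embedding,
# and use the trigger token in the prompt.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/smurf-style")
image = pipe("a village in the <smurfy> style").images[0]  # prompt is illustrative
image.save("smurfy.png")
```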
huggingtweets/ibdwssbm-kodorinssb-tsm_leffen
huggingtweets
2022-11-06T01:12:43Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-06T01:10:36Z
--- language: en thumbnail: http://www.huggingtweets.com/ibdwssbm-kodorinssb-tsm_leffen/1667697159635/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1560338805445611521/SwRxF60m_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1499195152639926276/t4_WbYMx_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1513270656196292608/t2voAbPh_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">TSM FTX Leffen & Panda | iBDW (Cody Schwab) & FLY | KoDoRiN</div> <div style="text-align: center; font-size: 14px;">@ibdwssbm-kodorinssb-tsm_leffen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from TSM FTX Leffen & Panda | iBDW (Cody Schwab) & FLY | KoDoRiN. | Data | TSM FTX Leffen | Panda \| iBDW (Cody Schwab) | FLY \| KoDoRiN | | --- | --- | --- | --- | | Tweets downloaded | 3244 | 3249 | 3048 | | Retweets | 301 | 493 | 479 | | Short tweets | 335 | 235 | 275 | | Tweets kept | 2608 | 2521 | 2294 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7pksc1xu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ibdwssbm-kodorinssb-tsm_leffen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19lbljqq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19lbljqq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ibdwssbm-kodorinssb-tsm_leffen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model.
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ryo-hsgw/xlm-roberta-base-finetuned-panx-en
ryo-hsgw
2022-11-05T23:46:25Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-05T23:43:18Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.6863181312569522 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3927 - F1: 0.6863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 | | 0.505 | 2.0 | 100 | 0.4627 | 0.6393 | | 0.3783 | 3.0 | 150 | 0.3927 | 0.6863 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
ryo-hsgw/xlm-roberta-base-finetuned-panx-it
ryo-hsgw
2022-11-05T23:43:08Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-05T23:39:48Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8224755700325732 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2521 - F1: 0.8225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8088 | 1.0 | 70 | 0.3423 | 0.7009 | | 0.2844 | 2.0 | 140 | 0.2551 | 0.8027 | | 0.1905 | 3.0 | 210 | 0.2521 | 0.8225 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
ryo-hsgw/xlm-roberta-base-finetuned-panx-fr
ryo-hsgw
2022-11-05T23:39:34Z
10
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-05T23:34:50Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8325761399966348 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2978 - F1: 0.8326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.574 | 1.0 | 191 | 0.3495 | 0.7889 | | 0.2649 | 2.0 | 382 | 0.2994 | 0.8242 | | 0.1716 | 3.0 | 573 | 0.2978 | 0.8326 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
huggingtweets/sama-willmanidis
huggingtweets
2022-11-05T23:12:05Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T23:11:05Z
--- language: en thumbnail: http://www.huggingtweets.com/sama-willmanidis/1667689920861/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/804990434455887872/BG0Xh7Oa_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1580635866334101504/K2OCKgAJ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sam Altman & Will Manidis</div> <div style="text-align: center; font-size: 14px;">@sama-willmanidis</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Sam Altman & Will Manidis. | Data | Sam Altman | Will Manidis | | --- | --- | --- | | Tweets downloaded | 3247 | 3244 | | Retweets | 389 | 62 | | Short tweets | 144 | 442 | | Tweets kept | 2714 | 2740 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2smlli7t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sama-willmanidis's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/285i3b4q) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/285i3b4q/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sama-willmanidis') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/aeronautblue
huggingtweets
2022-11-05T21:43:10Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T21:39:42Z
--- language: en thumbnail: http://www.huggingtweets.com/aeronautblue/1667684473479/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1515688111526891521/o_3LoG40_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">blue</div> <div style="text-align: center; font-size: 14px;">@aeronautblue</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from blue. | Data | blue | | --- | --- | | Tweets downloaded | 2373 | | Retweets | 460 | | Short tweets | 379 | | Tweets kept | 1534 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e1wsp7qa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aeronautblue's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/61928z1e) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/61928z1e/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/aeronautblue') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
tatakof/ppo-LunarLander-v2
tatakof
2022-11-05T21:38:58Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-05T17:16:16Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.23 +/- 24.06 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `<repo-name>.zip` convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub, then load it as a PPO agent.
checkpoint = load_from_hub(
    repo_id="tatakof/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
CTAE4OK/Niki
CTAE4OK
2022-11-05T21:14:22Z
0
0
null
[ "region:us" ]
null
2022-11-05T21:09:35Z
```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("DGSpitzer/Cyberpunk-Anime-Diffusion")
```
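A self-contained sketch of how the snippet above might be used end to end (the prompt and output path are illustrative assumptions):

```python
from diffusers import DiffusionPipeline

# Load the referenced checkpoint and generate a single image.
pipeline = DiffusionPipeline.from_pretrained("DGSpitzer/Cyberpunk-Anime-Diffusion")
image = pipeline("a cyberpunk anime city at night").images[0]
image.save("output.png")
```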
halflings/diabetes_detection_fixed3
halflings
2022-11-05T20:43:11Z
0
0
mlconsole
[ "mlconsole", "tabular-classification", "dataset:diabetes_detection", "license:unknown", "model-index", "region:us" ]
tabular-classification
2022-11-05T20:43:08Z
--- license: unknown inference: false tags: - mlconsole - tabular-classification library_name: mlconsole metrics: - accuracy - loss datasets: - diabetes_detection model-index: - name: diabetes_detection_fixed3 results: - task: type: tabular-classification name: tabular-classification dataset: type: diabetes_detection name: diabetes_detection metrics: - type: accuracy name: Accuracy value: 0.78125 - type: loss name: Model loss value: 0.523585319519043 --- # classification model trained on "diabetes_detection" 🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/diabetes_detection_fixed3) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console.
flamesbob/BrokenM_style
flamesbob
2022-11-05T20:35:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-03T16:45:01Z
--- license: creativeml-openrail-m --- `Broken mirror, shattered mirror, brokenM_style`: this style gives a shattered mirror / reflection look to prompts. ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license here
radeveljic99/ppo-LunarLander-v2
radeveljic99
2022-11-05T20:24:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-05T20:02:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 174.96 +/- 12.10 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `<repo-name>.zip` convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub, then load it as a PPO agent.
checkpoint = load_from_hub(
    repo_id="radeveljic99/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
Laxman/my-awesome-setfit-model
Laxman
2022-11-05T20:05:30Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-05T20:05:14Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 100 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 100, "warmup_steps": 10, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
OpenMatch/co-condenser-large-msmarco
OpenMatch
2022-11-05T20:02:56Z
9
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-28T00:11:04Z
--- license: mit --- This model was pretrained on MS MARCO passages first, then fine-tuned on the MS MARCO training set following the approach described in the paper **Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval**. The model can be used to reproduce the experimental results in the associated GitHub repository, available at https://github.com/OpenMatch/COCO-DR. The model uses BERT-large as its backbone, with 335M parameters.
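A hedged usage sketch, assuming the common coCondenser-style convention of taking the encoder's [CLS] vector as a dense passage embedding (the convention and the query text are assumptions, not stated in the card):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/co-condenser-large-msmarco")
model = AutoModel.from_pretrained("OpenMatch/co-condenser-large-msmarco")

# Encode a query and take the [CLS] token's hidden state as its embedding.
inputs = tokenizer("what is dense passage retrieval?", return_tensors="pt")
with torch.no_grad():
    embedding = model(**inputs).last_hidden_state[:, 0]
print(embedding.shape)  # (1, 1024) for a BERT-large backbone
```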
huggingtweets/_akhaliq-cyalm-iluminatibot
huggingtweets
2022-11-05T19:25:03Z
106
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T19:24:56Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1570915453534453763/sFncOvJP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/423106148279922688/anTfhXtr_400x400.jpeg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1451191636810092553/kpM5Fe12_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">illuminatibot & cyril almeida & AK</div> <div style="text-align: center; font-size: 14px;">@_akhaliq-cyalm-iluminatibot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from illuminatibot & cyril almeida & AK. | Data | illuminatibot | cyril almeida | AK | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 454 | 3246 | | Retweets | 0 | 9 | 1390 | | Short tweets | 1602 | 29 | 168 | | Tweets kept | 1648 | 416 | 1688 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1zfr5cxv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_akhaliq-cyalm-iluminatibot's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38z7gf3g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38z7gf3g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_akhaliq-cyalm-iluminatibot') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ballesteyoni/Woman
Ballesteyoni
2022-11-05T18:11:28Z
0
0
null
[ "region:us" ]
null
2022-11-05T18:09:52Z
Women dancing in a circle in menstrual blood in moon shadow with shamans
barbarabax/unicorns
barbarabax
2022-11-05T18:02:06Z
0
0
null
[ "region:us" ]
null
2022-11-05T15:44:14Z
--- language: - en tags: - ckpt - unicorn license: "openrail" --- Use `unicornstyle` in the prompt
ocm/distilbert-base-uncased-finetuned-emotion
ocm
2022-11-05T17:45:19Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-29T11:15:47Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.935 - name: F1 type: f1 value: 0.9351083637430424 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1582 - Accuracy: 0.935 - F1: 0.9351 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7703 | 1.0 | 250 | 0.2588 | 0.918 | 0.9165 | | 0.2031 | 2.0 | 500 | 0.1773 | 0.928 | 0.9282 | | 0.1385 | 3.0 | 750 | 0.1593 | 0.934 | 0.9342 | | 0.1101 | 4.0 | 1000 | 0.1582 | 0.935 | 0.9351 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
erose/wav2vec2-malayalam_english-3h
erose
2022-11-05T16:11:28Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "malayalam", "ml_en", "code-switching", "ml", "en", "dataset:erose/code_switching-ml-en", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-03T13:25:37Z
--- license: apache-2.0 description: wav2vec2 based model for malayalam-english code-switched speech language: - ml - en tags: - automatic-speech-recognition - malayalam - ml_en - code-switching datasets: - erose/code_switching-ml-en model-index: - name: wav2vec2 ml_en results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: erose/code_switching-ml-en (test set) type: code_switching-ml-en args: ml_en metrics: - name: Test WER type: wer value: 58.93 - name: Test CER type: cer value: 19.45 ---
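A minimal transcription sketch, assuming standard wav2vec2 CTC inference via the `transformers` ASR pipeline (`sample.wav` is a placeholder for a 16 kHz mono recording):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="erose/wav2vec2-malayalam_english-3h")
# Transcribe a local audio file; the path is illustrative.
print(asr("sample.wav")["text"])
```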
s-nlp/gpt2-base-gedi-detoxification
s-nlp
2022-11-05T16:05:17Z
30
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conditional-text-generation", "en", "arxiv:2109.08914", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - text-generation - conditional-text-generation --- # Model Details This is a conditional language model based on [gpt2-medium](https://huggingface.co/gpt2-medium/) but with a vocabulary from [t5-base](https://huggingface.co/t5-base), for compatibility with T5-based paraphrasers such as [t5-paranmt-detox](https://huggingface.co/SkolkovoInstitute/t5-paranmt-detox). The model is conditional on two styles, `toxic` and `normal`, and was fine-tuned on the dataset from the Jigsaw [toxic comment classification challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge). The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914) (Dale et al, 2021), which describes its possible usage in more detail. An example of its use and the code for its training are given in https://github.com/skoltech-nlp/detox. ## Model Description - **Developed by:** SkolkovoInstitute - **Model type:** Conditional Text Generation - **Language:** English - **Related Models:** - **Parent Model:** [gpt2-medium](https://huggingface.co/gpt2-medium/) - **Source of vocabulary:** [t5-base](https://huggingface.co/t5-base) - **Resources for more information:** - The paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914) - Its repository https://github.com/skoltech-nlp/detox. # Uses The model is intended for usage as a discriminator in a text detoxification pipeline using the ParaGeDi approach (see [the paper](https://arxiv.org/abs/2109.08914) for more details). It can also be used for text generation conditional on toxic or non-toxic style, but we do not know how to condition it on things other than toxicity, so we do not recommend this usage. Another possible use is as a toxicity classifier (using the Bayes rule), but the model is not expected to perform better than e.g. a BERT-based standard classifier. # Bias, Risks, and Limitations The model inherits all the risks of its parent model, [gpt2-medium](https://huggingface.co/gpt2-medium/). It also inherits all the biases of the [Jigsaw dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) on which it was fine-tuned. The model is intended to be conditional on style, but in fact it does not clearly separate the concepts of style and content, so it might regard some texts as toxic or safe based not on the style, but on their topics or keywords. # Training Details See the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914) and [the associated code](https://github.com/s-nlp/detox/tree/main/emnlp2021/style_transfer/paraGeDi). # Evaluation The model has not been evaluated on its own, only as a part of a ParaGeDi text detoxification pipeline (see [the paper](https://arxiv.org/abs/2109.08914)). # Citation **BibTeX:** ``` @inproceedings{dale-etal-2021-text, title = "Text Detoxification using Large Pre-trained Neural Models", author = "Dale, David and Voronov, Anton and Dementieva, Daryna and Logacheva, Varvara and Kozlova, Olga and Semenov, Nikita and Panchenko, Alexander", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.629", pages = "7979--7996", } ```
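A hedged loading sketch; the authors' detox repository defines the actual style-conditioning interface, so the snippet below only shows plain loading and generation, under the assumption that the repo ships a tokenizer loadable via `AutoTokenizer`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("s-nlp/gpt2-base-gedi-detoxification")
model = AutoModelForCausalLM.from_pretrained("s-nlp/gpt2-base-gedi-detoxification")

# Plain continuation; style conditioning requires the ParaGeDi wrapper from
# https://github.com/skoltech-nlp/detox (not reproduced here).
inputs = tokenizer("This is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```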
pepa/deberta-v3-base-fever
pepa
2022-11-05T15:03:56Z
7
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:copenlu/fever_gold_evidence", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-29T07:36:51Z
--- tags: - generated_from_trainer model-index: - name: deberta-v3-base-fever results: [] datasets: - copenlu/fever_gold_evidence --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base-fever This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5146 - eval_p: 0.8912 - eval_r: 0.8904 - eval_f1: 0.8897 - eval_runtime: 49.9875 - eval_samples_per_second: 376.194 - eval_steps_per_second: 47.032 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 4 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.12.1
pepa/deberta-v3-large-fever
pepa
2022-11-05T15:03:41Z
105
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:copenlu/fever_gold_evidence", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-01T20:22:41Z
--- tags: - generated_from_trainer model-index: - name: deberta-v3-large-fever results: [] datasets: - copenlu/fever_gold_evidence --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-fever This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5286 - eval_p: 0.8827 - eval_r: 0.8826 - eval_f1: 0.8816 - eval_runtime: 231.4062 - eval_samples_per_second: 81.264 - eval_steps_per_second: 10.16 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 4 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.12.1
pepa/deberta-v3-small-fever
pepa
2022-11-05T15:03:10Z
4
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:copenlu/fever_gold_evidence", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-29T07:39:36Z
--- tags: - generated_from_trainer model-index: - name: deberta-v3-small-fever results: [] datasets: - copenlu/fever_gold_evidence --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-small-fever This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4816 - eval_p: 0.8811 - eval_r: 0.8783 - eval_f1: 0.8780 - eval_runtime: 28.4486 - eval_samples_per_second: 661.017 - eval_steps_per_second: 82.64 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 4 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.12.1
pepa/bigbird-roberta-large-fever
pepa
2022-11-05T15:02:18Z
5
0
transformers
[ "transformers", "pytorch", "big_bird", "text-classification", "generated_from_trainer", "dataset:copenlu/fever_gold_evidence", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-02T07:55:05Z
--- tags: - generated_from_trainer model-index: - name: bigbird-roberta-large-fever results: [] datasets: - copenlu/fever_gold_evidence --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bigbird-roberta-large-fever This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4721 - eval_p: 0.8933 - eval_r: 0.8930 - eval_f1: 0.8926 - eval_runtime: 153.523 - eval_samples_per_second: 122.49 - eval_steps_per_second: 15.314 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 4 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.12.1
Marve271/BartConditionalGeneration-finetuned-insult
Marve271
2022-11-05T14:08:07Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-05T12:49:03Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: BartConditionalGeneration-finetuned-insult results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BartConditionalGeneration-finetuned-insult This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.2955 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 338 | 4.9652 | | 5.5666 | 2.0 | 676 | 4.2736 | | 4.9076 | 3.0 | 1014 | 4.2014 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
AlanRobotics/bert_q_a_test
AlanRobotics
2022-11-05T13:51:56Z
61
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
question-answering
2022-11-05T12:18:36Z
--- tags: - generated_from_keras_callback model-index: - name: bert_q_a_test results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert_q_a_test This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.1
huggingtweets/damienleevoice
huggingtweets
2022-11-05T13:39:35Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T13:34:14Z
--- language: en thumbnail: http://www.huggingtweets.com/damienleevoice/1667655559324/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1525084483036164097/z_XHCdw1_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Damien Lee</div> <div style="text-align: center; font-size: 14px;">@damienleevoice</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Damien Lee. | Data | Damien Lee | | --- | --- | | Tweets downloaded | 1774 | | Retweets | 52 | | Short tweets | 315 | | Tweets kept | 1407 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2x5e6fes/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @damienleevoice's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3f9sjksd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3f9sjksd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/damienleevoice') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
komekami/distilbert-base-uncased-finetuned-emotion
komekami
2022-11-05T12:39:54Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-05T11:11:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.934 - name: F1 type: f1 value: 0.9341415823944494 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1599 - Accuracy: 0.934 - F1: 0.9341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1887 | 1.0 | 250 | 0.1806 | 0.9295 | 0.9293 | | 0.1245 | 2.0 | 500 | 0.1599 | 0.934 | 0.9341 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
pallavi176/bert-fine-tuned-cola
pallavi176
2022-11-05T11:55:11Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-05T11:33:40Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-fine-tuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5778590180299453 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-fine-tuned-cola This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8136 - Matthews Correlation: 0.5779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4785 | 1.0 | 1069 | 0.5265 | 0.4996 | | 0.3162 | 2.0 | 2138 | 0.6626 | 0.5701 | | 0.1779 | 3.0 | 3207 | 0.8136 | 0.5779 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
fadhilarkn/setfit-model
fadhilarkn
2022-11-05T10:25:17Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-05T10:25:05Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# fadhilarkn/setfit-model

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('fadhilarkn/setfit-model')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('fadhilarkn/setfit-model')
model = AutoModel.from_pretrained('fadhilarkn/setfit-model')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=fadhilarkn/setfit-model)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 40,
    "warmup_steps": 4,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
jonathang/dog_breed
jonathang
2022-11-05T10:16:42Z
0
0
fastai
[ "fastai", "region:us" ]
null
2022-11-02T03:00:36Z
---
tags:
- fastai
---

# Amazing!

🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!

# Some next steps

1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!

Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.

---

# Model card

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
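## How to use (sketch)

Since the card above is still the stock template, here is a minimal, hedged loading sketch: it assumes the repo was pushed with the fastai integration of `huggingface_hub` and contains an image classifier; the image path is a placeholder.

```python
from huggingface_hub import from_pretrained_fastai

# Reload the exported Learner directly from the Hub repo.
learner = from_pretrained_fastai("jonathang/dog_breed")

# predict() returns (decoded class, class index, probabilities) for classifiers.
pred_class, pred_idx, probs = learner.predict("some_dog.jpg")  # placeholder path
print(pred_class, float(probs[pred_idx]))
```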
OpenBioML/LibreFold_AF2_reproduction
OpenBioML
2022-11-05T08:56:37Z
0
0
null
[ "AlphaFold", "protein model", "license:cc-by-4.0", "region:us" ]
null
2022-10-20T17:22:18Z
---
tags:
- AlphaFold
- protein model
license: cc-by-4.0
---

# LibreFold AF2 reproduction

Text

## Intro

Text

## Model description

Text

## Intended uses & limitations

Text

### How to use

Text

### Limitations and bias

Text

## Training data

Text

### Collection process

Text

## Training procedure

### Preprocessing

Text

### BibTeX entry and citation info

```bibtex
Text
```
Shunian/yelp_review_classification
Shunian
2022-11-05T07:21:17Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:yelp_review_full", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-05T06:38:54Z
---
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: yelp_review_classification
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: yelp_review_full
      type: yelp_review_full
      config: yelp_review_full
      split: train
      args: yelp_review_full
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6852
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# yelp_review_classification

This model was trained from scratch on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8517
- Accuracy: 0.6852

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 0.7149 | 1.0 | 40625 | 0.6889 | 0.7167 |
| 0.6501 | 2.0 | 81250 | 0.6967 | 0.6979 |
| 0.5547 | 3.0 | 121875 | 0.6915 | 0.7377 |
| 0.5375 | 4.0 | 162500 | 0.6895 | 0.7611 |
| 0.4386 | 5.0 | 203125 | 0.6852 | 0.8517 |

(The final row originally listed 0.8517 under Accuracy and 0.6852 under Validation Loss; the values are swapped here to match the summary above and the trend of the earlier epochs.)

### Framework versions

- Transformers 4.22.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
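## How to use

An illustrative sketch, not part of the original card: the checkpoint can be queried with the standard text-classification pipeline. yelp_review_full is a five-class star-rating task, so expect one of five labels; the exact label names depend on the id2label mapping saved with the checkpoint (assumed to be the Trainer defaults here).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Shunian/yelp_review_classification")

review = "The food was great, but the service was painfully slow."
print(classifier(review))
# e.g. [{'label': 'LABEL_2', 'score': ...}] with the default 0-4 star labels (assumption)
```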
MarkGG/Romance-baseline
MarkGG
2022-11-05T05:16:39Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T03:22:25Z
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Romance-baseline
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Romance-baseline

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5909

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.94 | 15 | 10.7009 |
| No log | 1.94 | 30 | 10.0799 |
| No log | 2.94 | 45 | 9.6627 |
| No log | 3.94 | 60 | 9.4619 |
| No log | 4.94 | 75 | 9.2970 |
| No log | 5.94 | 90 | 9.0919 |
| No log | 6.94 | 105 | 8.9071 |
| No log | 7.94 | 120 | 8.7240 |
| No log | 8.94 | 135 | 8.5485 |
| No log | 9.94 | 150 | 8.3952 |
| No log | 10.94 | 165 | 8.2469 |
| No log | 11.94 | 180 | 8.1193 |
| No log | 12.94 | 195 | 7.9918 |
| No log | 13.94 | 210 | 7.8662 |
| No log | 14.94 | 225 | 7.7394 |
| No log | 15.94 | 240 | 7.6219 |
| No log | 16.94 | 255 | 7.5135 |
| No log | 17.94 | 270 | 7.4110 |
| No log | 18.94 | 285 | 7.3021 |
| No log | 19.94 | 300 | 7.2021 |
| No log | 20.94 | 315 | 7.1276 |
| No log | 21.94 | 330 | 7.0278 |
| No log | 22.94 | 345 | 6.9627 |
| No log | 23.94 | 360 | 6.8806 |
| No log | 24.94 | 375 | 6.8214 |
| No log | 25.94 | 390 | 6.7725 |
| No log | 26.94 | 405 | 6.7101 |
| No log | 27.94 | 420 | 6.6792 |
| No log | 28.94 | 435 | 6.6361 |
| No log | 29.94 | 450 | 6.5950 |
| No log | 30.94 | 465 | 6.5745 |
| No log | 31.94 | 480 | 6.5469 |
| No log | 32.94 | 495 | 6.5520 |
| No log | 33.94 | 510 | 6.5121 |
| No log | 34.94 | 525 | 6.5255 |
| No log | 35.94 | 540 | 6.5179 |
| No log | 36.94 | 555 | 6.5079 |
| No log | 37.94 | 570 | 6.5138 |
| No log | 38.94 | 585 | 6.5170 |
| No log | 39.94 | 600 | 6.4807 |
| No log | 40.94 | 615 | 6.5338 |
| No log | 41.94 | 630 | 6.4960 |
| No log | 42.94 | 645 | 6.5342 |
| No log | 43.94 | 660 | 6.5119 |
| No log | 44.94 | 675 | 6.5614 |
| No log | 45.94 | 690 | 6.5235 |
| No log | 46.94 | 705 | 6.5388 |
| No log | 47.94 | 720 | 6.5574 |
| No log | 48.94 | 735 | 6.5581 |
| No log | 49.94 | 750 | 6.5909 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
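## How to use

An illustrative sketch, not from the original card: the checkpoint can be sampled with the standard text-generation pipeline; the prompt and decoding settings below are arbitrary. For reference, the final evaluation loss of 6.5909 corresponds to a perplexity of exp(6.5909) ≈ 728, assuming it is the mean token-level cross-entropy in nats.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MarkGG/Romance-baseline")

# Prompt and sampling settings are illustrative, not documented by the card.
outputs = generator("She opened the letter and", max_new_tokens=40,
                    do_sample=True, top_p=0.9, num_return_sequences=2)
for out in outputs:
    print(out["generated_text"])
```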
nubby/anime_multi-artist
nubby
2022-11-05T03:57:07Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-01T19:38:58Z
---
license: creativeml-openrail-m
---

Waifu-Diffusion-v1-3 based StableDiffusion model with Dreambooth training on images from 3 different anime style artists. Trained to 17,000 steps using 155 total training images.

## Usage

Can be used in StableDiffusion, including the extremely popular Web UI by Automatic1111, like any other model by placing the .CKPT file in the correct directory. Please consult the documentation for your installation of StableDiffusion for more specific instructions.

Use ```"m_kgrartist"``` for kagura_tohru style, ```"m_ozdmartist"``` for ozadomi style, or ```"m_srartist"``` for seero style in your prompt to invoke the style of the desired artist.

## Example images from ```"m_kgrartist"```

<table>
  <tr>
    <td><img src=https://i.imgur.com/SIA7g2C.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/UbBsvZo.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/kMv5MH9.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/BiYihYs.png width=100% height=100%/></td>
  </tr>
</table>

## Example images from ```"m_ozdmartist"```

<table>
  <tr>
    <td><img src=https://i.imgur.com/t2UmHWa.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/LFrQsy6.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/DnHg1Kp.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/cXooD2r.png width=100% height=100%/></td>
  </tr>
</table>

## Example images from ```"m_srartist"```

<table>
  <tr>
    <td><img src=https://i.imgur.com/0gsFN2H.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/aDJr8x6.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/AUafGCd.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/va246Yv.png width=100% height=100%/></td>
  </tr>
</table>

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
MarkGG/Romance-cleaned-1
MarkGG
2022-11-05T03:10:38Z
105
0
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-26T03:35:43Z
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Romance-cleaned-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Romance-cleaned-1

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7175

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.97 | 29 | 9.9497 |
| No log | 1.97 | 58 | 9.1816 |
| No log | 2.97 | 87 | 8.5947 |
| No log | 3.97 | 116 | 8.2217 |
| No log | 4.97 | 145 | 7.8354 |
| No log | 5.97 | 174 | 7.5075 |
| No log | 6.97 | 203 | 7.2112 |
| No log | 7.97 | 232 | 6.9077 |
| No log | 8.97 | 261 | 6.5994 |
| No log | 9.97 | 290 | 6.3077 |
| No log | 10.97 | 319 | 6.0416 |
| No log | 11.97 | 348 | 5.8126 |
| No log | 12.97 | 377 | 5.6197 |
| No log | 13.97 | 406 | 5.4789 |
| No log | 14.97 | 435 | 5.3665 |
| No log | 15.97 | 464 | 5.2738 |
| No log | 16.97 | 493 | 5.1942 |
| No log | 17.97 | 522 | 5.1382 |
| No log | 18.97 | 551 | 5.0784 |
| No log | 19.97 | 580 | 5.0347 |
| No log | 20.97 | 609 | 4.9873 |
| No log | 21.97 | 638 | 4.9514 |
| No log | 22.97 | 667 | 4.9112 |
| No log | 23.97 | 696 | 4.8838 |
| No log | 24.97 | 725 | 4.8468 |
| No log | 25.97 | 754 | 4.8221 |
| No log | 26.97 | 783 | 4.7996 |
| No log | 27.97 | 812 | 4.7815 |
| No log | 28.97 | 841 | 4.7606 |
| No log | 29.97 | 870 | 4.7394 |
| No log | 30.97 | 899 | 4.7167 |
| No log | 31.97 | 928 | 4.7140 |
| No log | 32.97 | 957 | 4.6910 |
| No log | 33.97 | 986 | 4.6844 |
| No log | 34.97 | 1015 | 4.6765 |
| No log | 35.97 | 1044 | 4.6687 |
| No log | 36.97 | 1073 | 4.6721 |
| No log | 37.97 | 1102 | 4.6724 |
| No log | 38.97 | 1131 | 4.6629 |
| No log | 39.97 | 1160 | 4.6772 |
| No log | 40.97 | 1189 | 4.6795 |
| No log | 41.97 | 1218 | 4.6788 |
| No log | 42.97 | 1247 | 4.6832 |
| No log | 43.97 | 1276 | 4.6954 |
| No log | 44.97 | 1305 | 4.7009 |
| No log | 45.97 | 1334 | 4.7082 |
| No log | 46.97 | 1363 | 4.7140 |
| No log | 47.97 | 1392 | 4.7158 |
| No log | 48.97 | 1421 | 4.7181 |
| No log | 49.97 | 1450 | 4.7175 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
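## Interpreting the loss

A derived figure, not part of the Trainer output: assuming the reported evaluation loss is the mean token-level cross-entropy in nats, the final loss corresponds to a perplexity of exp(4.7175) ≈ 112, while the epoch-38 minimum of 4.6629 corresponds to exp(4.6629) ≈ 106 — consistent with the mild overfitting visible over the last ten or so epochs.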
huggingtweets/transgirltoking
huggingtweets
2022-11-05T02:57:28Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T02:56:05Z
---
language: en
thumbnail: http://www.huggingtweets.com/transgirltoking/1667617044734/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1587630117890949121/Uo9ukfaP_400x400.jpg&#39;)"> </div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">fallmoder</div>
  <div style="text-align: center; font-size: 14px;">@transgirltoking</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from fallmoder.

| Data | fallmoder |
| --- | --- |
| Tweets downloaded | 950 |
| Retweets | 280 |
| Short tweets | 97 |
| Tweets kept | 573 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/279zhs1a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @transgirltoking's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ipbrk4ae) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ipbrk4ae/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/transgirltoking')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/00daniponie
huggingtweets
2022-11-05T01:51:32Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T01:09:38Z
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1495719135858233345/0T3aMUoa_400x400.png&#39;)"> </div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">dani little ponie 🏳️‍⚧️🐀</div>
  <div style="text-align: center; font-size: 14px;">@00daniponie</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from dani little ponie 🏳️‍⚧️🐀.

| Data | dani little ponie 🏳️‍⚧️🐀 |
| --- | --- |
| Tweets downloaded | 3227 |
| Retweets | 1904 |
| Short tweets | 56 |
| Tweets kept | 1267 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cbrld7j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @00daniponie's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39w151kw) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39w151kw/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/00daniponie')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
hazrulakmal/distilgpt2-ecb-finetuned
hazrulakmal
2022-11-05T01:25:33Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-03T19:14:53Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-ecb-finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilgpt2-ecb-finetuned

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8705

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9655 | 1.0 | 17714 | 0.9472 |
| 0.9121 | 2.0 | 35428 | 0.8986 |
| 0.8682 | 3.0 | 53142 | 0.8705 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
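## How to use

A minimal sketch, not part of the original card; the prompt and decoding settings are illustrative (an ECB-style sentence opener), not documented usage.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hazrulakmal/distilgpt2-ecb-finetuned")
model = AutoModelForCausalLM.from_pretrained("hazrulakmal/distilgpt2-ecb-finetuned")

prompt = "The Governing Council decided to"  # illustrative central-bank-style prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                                top_k=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```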
Nanohana/efficietnet-lstm-image-captioning
Nanohana
2022-11-05T00:28:47Z
0
0
null
[ "region:us" ]
null
2022-11-04T22:51:32Z
---
title: image-captioning
sdk: gradio
app_file: app.py
---

# image-captioning

This repository contains an image captioning system that is composed of:

- A pretrained EfficientNet-B0 (ImageNet weights)
- A word embedding built from the Flickr8k vocabulary
- A 1-layer LSTM

It was trained for 100 epochs (with the CNN weights frozen), and the vocabulary was built from words that appear at least 5 times in the Flickr8k dataset.

![image](https://user-images.githubusercontent.com/56324869/198848257-d981dd83-d362-491a-bbf0-f7ec305798ee.png)
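## Architecture sketch

A minimal PyTorch sketch of the architecture described above, for orientation only: the frozen EfficientNet-B0 backbone, the word embedding, and the single-layer LSTM come from the card, while the embedding/hidden sizes, the linear projection, and the teacher-forcing interface are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class CaptioningModel(nn.Module):
    """EfficientNet-B0 encoder + 1-layer LSTM decoder (dims are assumed, not from the card)."""

    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        # Pretrained ImageNet backbone; weights frozen, as described in the card.
        backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier head
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.img_proj = nn.Linear(1280, embed_dim)  # EfficientNet-B0 feature size is 1280
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=1, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)       # (B, 1280)
        feats = self.img_proj(feats).unsqueeze(1)     # (B, 1, E) image "token"
        tokens = self.embed(captions)                 # (B, T, E)
        inputs = torch.cat([feats, tokens], dim=1)    # prepend image feature, teacher forcing
        out, _ = self.lstm(inputs)
        return self.fc(out)                           # (B, T+1, vocab) next-word logits
```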
huggingtweets/hellgirl2004
huggingtweets
2022-11-05T00:11:47Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-05T00:11:39Z
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1581781821414686722/lvOpNTQf_400x400.jpg&#39;)"> </div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">🎃 rei 💀</div>
  <div style="text-align: center; font-size: 14px;">@hellgirl2004</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from 🎃 rei 💀.

| Data | 🎃 rei 💀 |
| --- | --- |
| Tweets downloaded | 3168 |
| Retweets | 1517 |
| Short tweets | 584 |
| Tweets kept | 1067 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/m0ohu4nr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hellgirl2004's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mcqxcff) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mcqxcff/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/hellgirl2004')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k
mpjan
2022-11-05T00:08:25Z
8
4
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "pt", "dataset:unicamp-dl/mmarco", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-05T00:03:16Z
---
pipeline_tag: sentence-similarity
language:
- 'pt'
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- 'unicamp-dl/mmarco'
---

# mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

It is a fine-tuning of [sentence-transformers/msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) on the first 300k triplets of the Portuguese subset in [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco).

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k')
model = AutoModel.from_pretrained('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 18750 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```

Parameters of the fit()-Method:
```
{
    "epochs": 5,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 9375,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
jinhybr/OCR-Donut-CORD
jinhybr
2022-11-05T00:07:44Z
1,087
199
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "donut", "image-to-text", "vision", "arxiv:2111.15664", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2022-11-04T13:22:17Z
---
license: mit
tags:
- donut
- image-to-text
- vision
---

# Donut (base-sized model, fine-tuned on CORD)

Donut model fine-tuned on CORD. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).

Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg)

## Intended uses & limitations

This model is fine-tuned on CORD, a document parsing dataset.

We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.

## CORD Dataset

CORD: A Consolidated Receipt Dataset for Post-OCR Parsing.

![cord](https://github.com/clovaai/cord/blob/master/figure/sample.png?raw=true)
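## How to use

A condensed sketch following the document-parsing example in the Transformers Donut documentation; the image path is a placeholder, and the `<s_cord-v2>` task prompt is the one used by the official CORD checkpoint, assumed to carry over to this fine-tune.

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("jinhybr/OCR-Donut-CORD")
model = VisionEncoderDecoderModel.from_pretrained("jinhybr/OCR-Donut-CORD")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("receipt.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# CORD checkpoints are prompted with a task-specific start token.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values.to(device),
    decoder_input_ids=decoder_input_ids.to(device),
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

# Strip special tokens and the task start token, then convert to JSON.
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```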
username23231/_
username23231
2022-11-05T00:07:18Z
0
2
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-11-05T00:07:18Z
---
license: bigscience-bloom-rail-1.0
---
lilouuch/mbert2mbert-arabic-text-summarization-finetuned-xsum_arabic_abstractive_final_finaln
lilouuch
2022-11-04T23:46:53Z
16
1
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-04T19:53:43Z
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mbert2mbert-arabic-text-summarization-finetuned-xsum_arabic_abstractive_final_finaln
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mbert2mbert-arabic-text-summarization-finetuned-xsum_arabic_abstractive_final_finaln

This model is a fine-tuned version of [malmarjeh/mbert2mbert-arabic-text-summarization](https://huggingface.co/malmarjeh/mbert2mbert-arabic-text-summarization) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2826
- Rouge1: 0.0119
- Rouge2: 0.0
- Rougel: 0.0119
- Rougelsum: 0.0119
- Gen Len: 41.8856

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5104 | 1.0 | 7915 | 2.3684 | 0.0 | 0.0 | 0.0 | 0.0 | 41.8314 |
| 2.2222 | 2.0 | 15830 | 2.2826 | 0.0119 | 0.0 | 0.0119 | 0.0119 | 41.8856 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
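## How to use

A hedged sketch, not from the original card: since the checkpoint is an encoder-decoder (BERT2BERT) model, it can be driven through the text2text-generation pipeline. The generation settings are illustrative, and given the very low ROUGE scores reported above, outputs should be checked carefully.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_id = "lilouuch/mbert2mbert-arabic-text-summarization-finetuned-xsum_arabic_abstractive_final_finaln"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

summarizer = pipeline("text2text-generation", model=model, tokenizer=tokenizer)

article = "..."  # an Arabic news article goes here
summary = summarizer(article, max_length=64, num_beams=4)[0]["generated_text"]
print(summary)
```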