Dataset schema:
- modelId: string (length 4-81)
- tags: list
- pipeline_tag: string (17 distinct values)
- config: dict
- downloads: int64 (0-59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51-438k)
Declan/WallStreetJournal_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
---
license: other
---
# Disclaimer

The models in this folder were not made by me; the copyright belongs to the original authors (see http://www.civitai.com for the copyright details of each model). I uploaded them to this folder only for convenient online access to the resources, not for profit.

# List of Models

All the models in this folder are detailed in the table below.

| Model Name | Civitai Page Link | Civitai Download Link |
|------------|-------------------|-----------------------|
| hassakuHentaiModel_v12.safetensors | https://civitai.com/models/2583?modelVersionId=62528 | https://civitai.com/api/download/models/62528 |
| hassakuHentaiModel_v11.safetensors | https://civitai.com/models/2583?modelVersionId=37521 | https://civitai.com/api/download/models/37521 |
| hassakuHentaiModel_v10m.safetensors | https://civitai.com/models/2583?modelVersionId=32763 | https://civitai.com/api/download/models/32763 |
| hassakuHentaiModel_v10.safetensors | https://civitai.com/models/2583?modelVersionId=29539 | https://civitai.com/api/download/models/29539 |
Declan/test_push
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole8 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
DeepChem/ChemBERTa-77M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,416
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: es_fi_all_copy_quy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # es_fi_all_copy_quy This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-fi](https://huggingface.co/Helsinki-NLP/opus-mt-es-fi) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4826 - Bleu: 1.0774 - Chrf: 27.8374 - Gen Len: 49.2425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:-------:| | 0.5479 | 0.03 | 1000 | 0.6737 | 0.3055 | 13.5765 | 61.5895 | | 0.4856 | 0.07 | 2000 | 0.6330 | 0.1975 | 16.9698 | 73.9869 | | 0.4541 | 0.1 | 3000 | 0.6120 | 0.1727 | 17.4235 | 77.335 | | 0.4381 | 0.13 | 4000 | 0.5962 | 0.3781 | 18.0872 | 48.1066 | | 0.4236 | 0.17 | 5000 | 0.5880 | 0.3741 | 18.808 | 60.0463 | | 0.4165 | 0.2 | 6000 | 0.5721 | 0.4366 | 19.5753 | 54.2746 | | 0.3978 | 0.23 | 7000 | 0.5669 | 0.3667 | 20.4865 | 57.9366 | | 0.3916 | 0.27 | 8000 | 0.5536 | 0.8014 | 21.485 | 47.503 | | 0.3747 | 0.3 | 9000 | 0.5501 | 0.5344 | 21.0513 | 60.3924 | | 0.3825 | 0.33 | 10000 | 0.5375 | 0.4356 | 22.2798 | 56.7726 | | 0.3716 | 0.36 | 11000 | 0.5390 | 0.5945 | 23.2025 | 53.9366 | | 0.3518 | 0.4 | 12000 | 0.5312 | 0.7343 | 23.1143 | 52.326 | | 0.3495 | 0.43 | 13000 | 0.5217 | 1.0112 | 24.6907 | 49.3903 | | 0.3508 | 0.46 | 14000 | 0.5222 | 0.5915 | 23.556 | 57.2656 | | 0.3464 | 0.5 | 15000 | 0.5165 | 0.7044 | 24.2944 | 59.159 | | 0.3476 | 0.53 | 16000 | 0.5142 | 1.1 | 25.0408 | 52.1127 | | 0.3444 | 0.56 | 17000 | 0.5108 | 0.9361 | 24.8808 | 48.9366 | | 0.3404 | 0.6 | 18000 | 0.5056 | 0.9007 | 25.3524 | 51.6962 | | 0.3286 | 0.63 | 19000 | 0.5029 | 0.9896 | 24.8249 | 52.7233 | | 0.3362 | 0.66 | 20000 | 0.5012 | 0.8774 | 26.6471 | 47.3732 | | 0.3386 | 0.7 | 21000 | 0.5028 | 0.8963 | 25.7596 | 54.1861 | | 0.3371 | 0.73 | 22000 | 0.5002 | 0.8477 | 24.9775 | 54.2958 | | 0.3284 | 0.76 | 23000 | 0.4955 | 0.9197 | 25.5964 | 55.2606 | | 0.3239 | 0.8 | 24000 | 0.4943 | 1.1353 | 27.5912 | 48.5714 | | 0.3214 | 0.83 | 25000 | 0.4889 | 0.9846 | 26.098 | 51.0221 | | 0.3183 | 0.86 | 26000 | 0.4889 | 1.1816 | 28.4223 | 44.1288 | | 0.3164 | 0.89 | 27000 | 0.4874 | 1.028 | 27.4202 | 48.2616 | | 0.3162 | 0.93 | 28000 | 0.4839 | 1.0008 | 26.1891 | 54.4095 | | 0.3167 | 0.96 | 29000 | 0.4837 | 1.1952 | 27.7299 | 47.3159 | | 0.3141 | 0.99 | 30000 | 0.4809 | 0.7488 | 27.6076 | 48.9638 | | 0.3043 | 1.03 | 31000 | 0.4826 | 1.0774 | 27.8374 | 49.2425 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
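For reference, a minimal translation sketch with 🤗 Transformers. The checkpoint name below is a placeholder taken from the card's model name; the actual repo id under which this fine-tune is published is not given here, so substitute it before running:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id from the card's model name; replace with the real one.
checkpoint = "es_fi_all_copy_quy"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The base model is Marian opus-mt-es-fi, so the input side is Spanish text.
inputs = tokenizer("Hola, ¿cómo estás?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```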
DeepChem/SmilesTokenizer_PubChem_1M
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
null
--- language: - eo license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_13_0 - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-common_voice_13_0-eo-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice_13_0-eo-1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_13_0 - EO dataset. It achieves the following results on the evaluation set: - Loss: inf - Cer: 0.0735 - Wer: 0.3211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | No log | 0.08 | 400 | inf | 0.1200 | 0.5630 | | 3.8958 | 0.16 | 800 | inf | 0.0572 | 0.2620 | | 0.2086 | 0.24 | 1200 | inf | 0.0443 | 0.1990 | | 0.1588 | 0.32 | 1600 | inf | 0.0453 | 0.1967 | | 0.1413 | 0.4 | 2000 | inf | 0.0397 | 0.1851 | | 0.1413 | 0.48 | 2400 | inf | 0.0378 | 0.1647 | | 0.1306 | 0.56 | 2800 | inf | 0.0381 | 0.1649 | | 0.1351 | 0.64 | 3200 | inf | 0.0733 | 0.3815 | | 0.2618 | 0.73 | 3600 | inf | 0.0668 | 0.3090 | | 0.2627 | 0.81 | 4000 | inf | 0.0628 | 0.2713 | | 0.2627 | 0.89 | 4400 | inf | 0.0746 | 0.3186 | | 0.6158 | 0.97 | 4800 | inf | 0.0953 | 0.3936 | | 0.7826 | 1.05 | 5200 | inf | 0.0629 | 0.2717 | | 0.4931 | 1.13 | 5600 | inf | 0.0692 | 0.3018 | | 0.4322 | 1.21 | 6000 | inf | 0.0756 | 0.3190 | | 0.4322 | 1.29 | 6400 | inf | 0.0784 | 0.3273 | | 0.468 | 1.37 | 6800 | inf | 0.0753 | 0.3183 | | 0.3918 | 1.45 | 7200 | inf | 0.0739 | 0.3146 | | 0.3609 | 1.53 | 7600 | inf | 0.0720 | 0.3090 | | 0.3524 | 1.61 | 8000 | inf | 0.0714 | 0.3062 | | 0.3524 | 1.69 | 8400 | inf | 0.0721 | 0.3088 | | 0.3597 | 1.77 | 8800 | inf | 0.0728 | 0.3157 | | 0.363 | 1.85 | 9200 | inf | 0.0737 | 0.3196 | | 0.3774 | 1.93 | 9600 | inf | 0.0736 | 0.3219 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
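For reference, a minimal transcription sketch, assuming the checkpoint was pushed together with its processor. The repo id below is a placeholder based on the card's model name:

```python
from transformers import pipeline

# Placeholder repo id from the card's model name; replace with the real one.
asr = pipeline("automatic-speech-recognition", model="wav2vec2-common_voice_13_0-eo-1")

# Transcribe an Esperanto audio file (any format ffmpeg can decode).
print(asr("sample.wav")["text"])
```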
DeepESP/gpt2-spanish-medium
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
340
2023-05-10T03:29:08Z
--- license: afl-3.0 metrics: - accuracy library_name: keras tags: - code --- - GitHub Repo: [keras_Realtime_Multi-Person_Pose_Estimation](https://github.com/michalfaber/keras_Realtime_Multi-Person_Pose_Estimation) - Mirror This model is a mirror from [model.h5](https://www.dropbox.com/s/llpxd14is7gyj0z/model.h5).
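For reference, a minimal loading sketch in Python/Keras. This assumes `model.h5` stores a full serialized model; if the file contains only weights, rebuild the architecture from the linked GitHub repo and call `load_weights` instead:

```python
import tensorflow as tf

# Assumes model.h5 from this repository has been downloaded locally.
# compile=False since only inference is needed and no training config may be stored.
model = tf.keras.models.load_model("model.h5", compile=False)
model.summary()
```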
DeepPavlov/bert-base-multilingual-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "multilingual", "arxiv:1704.05426", "arxiv:1809.05053", "arxiv:1908.10084", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
140
null
--- license: bigscience-bloom-rail-1.0 tags: - generated_from_trainer model-index: - name: bloom-560m-finetuned-Instruct-DB-v results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloom-560m-finetuned-Instruct-DB-v This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
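For reference, a minimal generation sketch. The repo id below is a placeholder from the card's model name, and the instruction-style prompt format is an assumption, since the card does not document the training prompt template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo id from the card's model name; replace with the real one.
checkpoint = "bloom-560m-finetuned-Instruct-DB-v"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Assumed instruction-style prompt; adjust to match the actual training format.
prompt = "Instruction: Explain what fine-tuning is.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```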
DeepPavlov/marianmt-tatoeba-ruen
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DeepPavlov/rubert-base-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1908.10084", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46,991
null
Access to model jowikroon/Henk is restricted and you are not in the authorized list. Visit https://huggingface.co/jowikroon/Henk to ask for access.
DeepPavlov/rubert-base-cased
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
148,127
null
---
license: apache-2.0
---
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RootYuan/opt-350m-alpaca")
model = AutoModelForCausalLM.from_pretrained("RootYuan/opt-350m-alpaca")
```
usage:
```python
instruction = "Classify the following into animals, plants, and minerals"
input_text = "Oak tree, copper ore, elephant"

# Alpaca-style prompt templates, with and without an input field.
prompts_no_input = f"### Instruction:\n{instruction}\n\n### Response:"
prompts_with_input = f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:"
prompts = prompts_no_input if input_text is None else prompts_with_input

inputs = tokenizer.encode(prompts, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
# Drop special tokens before slicing off the prompt so the offsets line up.
ans = tokenizer.decode(outputs[0], skip_special_tokens=True)[len(prompts):]

if input_text is None:
    print(f"Human: {instruction}")
else:
    print(f"Human: {instruction}\nInput: {input_text}")
print(f"Assistant: {ans}")
```
outputs:
```
Human: Classify the following into animals, plants, and minerals
Input: Oak tree, copper ore, elephant
Assistant: Oak tree: Plant Copper ore: Mineral Elephant: Animal
```
Other Examples:

**Human:** Say 'Hi'

**Assistant:** Hi! How can I help you today?

**Human:** Give me three helpful tips about health

**Assistant:** 1. Eat a balanced diet. 2. Get enough sleep. 3. Exercise regularly.

**Human:** Generate a list of ten items a person might need for a camping trip

**Assistant:** 1. Tent 2. Sleeping bag 3. Flashlight 4. Matches 5. First Aid Kit 6. Matches 7. Flashlight 8. Matches 9. Water Bottle 10. Matche

**Human:** Evaluate this sentence for spelling and grammar mistakes

**Input:** He finnished his meal and left the resturant

**Assistant:** He finished his meal and left the restaurant.

**Human:** Solve the following problem in python.

**Input:** Given variable a and variable b, return then sum of the a and b.

**Assistant:** sum = a + b print(sum)

**Human:** Classify the following into animals, plants, and minerals

**Input:** Oak tree, copper ore, elephant

**Assistant:** Oak tree: Plant Copper ore: Mineral Elephant: Animal

**Human:** Do you know TikTok?

**Assistant:** Yes, TikTok is an online video platform that is popular with users around the world. It is a platform that allows users to create and share short videos, and it has over 1 billion users worldwide.

**Human:** Discuss the causes of the Great Depression

**Assistant:** The Great Depression was caused by a combination of factors, including a lack of economic growth, a decrease in consumer spending, and a decrease in the availability of credit. These factors combined to create a severe economic downturn that lasted for more than two decades.
DeltaHub/adapter_t5-3b_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks sheep tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - jacksonchen/dreambooth_lora_xiyangyang1 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks sheep using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: True.
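For reference, a minimal diffusers sketch for applying these LoRA weights on top of the base model. It assumes a recent diffusers version with `load_lora_weights` and a CUDA device; adjust as needed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaption weights from this repository on top of the base model.
pipe.load_lora_weights("jacksonchen/dreambooth_lora_xiyangyang1")

image = pipe("a photo of sks sheep", num_inference_steps=30).images[0]
image.save("sks_sheep.png")
```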
DeltaHub/lora_t5-base_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
---
inference: true
language:
- en
tags:
- stable-diffusion
- text-to-image
pipeline_tag: text-to-image
---
This is a Stable Diffusion model and a collection of embeddings trained on Bluey characters, which can generate semi-consistent and accurate results. While nowhere near perfect, it's a start, and it has certainly been fun to work on.

# It is absolutely essential you *use both the checkpoint AND embeddings.*

The way I trained it is a little unconventional, and you will struggle to get decent results without all the components. The base understanding of character shapes and style was learned in the checkpoint. The individual character colors, patterns, proportions, and extra refinement were applied through embeddings.

## Examples of generated results:

![ReferenceImage](RefImages/1.png) ![ReferenceImage](RefImages/2.png) ![ReferenceImage](RefImages/3.png)

(note that these images were cherry-picked, don't be shocked to see some cursed abominations)

# Example negative prompt:
```
weirdly colored eyes, weird eyelids, bad art, ((inaccurate)), (missing pupils), weird pupils, inaccurate eyes, weird eyes, bad proportions, more than two ears, (less than two ears), blank eyes, missing hands, weird, inaccurate pupils, colored eyes, (missing tail), ugly, weird scaling, deformed, disfigured, stretched, tiled, unofficial, unofficial artwork, poorly drawn, blurry, (pixelated), pink, purple, letterbox, border, image artifacts, compressed
```
# Limitations:
- Everything was trained from 512x512 images, so higher-resolution generations will have issues.
- It does not know what the characters look like from behind, and only generates images of them looking forward, left, or right.
- It struggles with more dynamic poses: it does well with standing, is mediocre with sitting, and struggles with lying down and other poses.
- It often gets confused by eyes and eyelids.
- The Bingo embedding has less training, and the model often mistakes her for Bluey.
- It is difficult to get multiple different characters into a single image consistently without oddities.

# **(Disclaimer)**
*I am not affiliated with Ludo and do not own the rights to Bluey or its content. This was trained on images from Bluey, which I do not own, and produces images similar to Bluey. This was made purely from an educational, research, and general fun perspective. I have the highest level of respect for Ludo and their intellectual property, and do not in any way take credit for their work.*
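For reference, a hedged diffusers sketch of combining the checkpoint with an embedding. The file names and token below are hypothetical, since the card does not list the exact checkpoint or embedding paths:

```python
from diffusers import StableDiffusionPipeline

# Hypothetical file names; use the actual checkpoint/embedding files from this repo.
pipe = StableDiffusionPipeline.from_single_file("bluey_checkpoint.safetensors")

# The textual-inversion embeddings carry the per-character colors and proportions.
pipe.load_textual_inversion("bluey_embedding.pt", token="bluey-char")

image = pipe("bluey-char, standing, looking forward").images[0]
image.save("bluey.png")
```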
Denilson/gbert-base-germaner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-sarcasm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sarcasm This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3075 - Matthews Correlation: 0.4109 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 57 | 0.7322 | 0.0 | | No log | 2.0 | 114 | 0.6734 | 0.1752 | | No log | 3.0 | 171 | 0.6436 | 0.3228 | | No log | 4.0 | 228 | 0.7826 | 0.2778 | | No log | 5.0 | 285 | 1.0203 | 0.2707 | | No log | 6.0 | 342 | 1.0190 | 0.3356 | | No log | 7.0 | 399 | 1.1675 | 0.3177 | | No log | 8.0 | 456 | 1.5206 | 0.2514 | | 0.3597 | 9.0 | 513 | 1.5789 | 0.4097 | | 0.3597 | 10.0 | 570 | 1.5752 | 0.3740 | | 0.3597 | 11.0 | 627 | 1.9003 | 0.3506 | | 0.3597 | 12.0 | 684 | 1.9354 | 0.3855 | | 0.3597 | 13.0 | 741 | 1.9770 | 0.3289 | | 0.3597 | 14.0 | 798 | 1.9802 | 0.3583 | | 0.3597 | 15.0 | 855 | 2.1322 | 0.3255 | | 0.3597 | 16.0 | 912 | 2.1541 | 0.2994 | | 0.3597 | 17.0 | 969 | 2.2047 | 0.2992 | | 0.0329 | 18.0 | 1026 | 2.0794 | 0.3466 | | 0.0329 | 19.0 | 1083 | 2.0705 | 0.3012 | | 0.0329 | 20.0 | 1140 | 2.0158 | 0.3759 | | 0.0329 | 21.0 | 1197 | 2.3999 | 0.3151 | | 0.0329 | 22.0 | 1254 | 2.1017 | 0.3917 | | 0.0329 | 23.0 | 1311 | 2.3275 | 0.3255 | | 0.0329 | 24.0 | 1368 | 2.2258 | 0.3386 | | 0.0329 | 25.0 | 1425 | 2.3628 | 0.3406 | | 0.0329 | 26.0 | 1482 | 2.4197 | 0.3077 | | 0.0145 | 27.0 | 1539 | 2.2661 | 0.3759 | | 0.0145 | 28.0 | 1596 | 2.4074 | 0.3077 | | 0.0145 | 29.0 | 1653 | 2.3326 | 0.3255 | | 0.0145 | 30.0 | 1710 | 2.2813 | 0.3740 | | 0.0145 | 31.0 | 1767 | 2.3242 | 0.3181 | | 0.0145 | 32.0 | 1824 | 2.5039 | 0.2930 | | 0.0145 | 33.0 | 1881 | 2.6045 | 0.3151 | | 0.0145 | 34.0 | 1938 | 2.3075 | 0.4109 | | 0.0145 | 35.0 | 1995 | 2.3572 | 0.3759 | | 0.0129 | 36.0 | 2052 | 2.3833 | 0.3759 | | 0.0129 | 37.0 | 2109 | 2.6260 | 0.3009 | | 0.0129 | 38.0 | 2166 | 2.6132 | 0.3289 | | 0.0129 | 39.0 | 2223 | 2.4151 | 0.3989 | | 0.0129 | 40.0 | 2280 | 2.5695 | 0.3360 | | 0.0129 | 41.0 | 2337 | 2.3902 | 0.3989 | | 0.0129 | 42.0 | 2394 | 2.4388 | 0.3759 | | 0.0129 | 43.0 | 2451 | 2.6323 | 0.3289 | | 0.0065 | 44.0 | 2508 | 2.6131 | 0.3553 | | 0.0065 | 45.0 | 2565 | 2.4426 | 0.3958 | | 0.0065 | 46.0 | 2622 | 2.4481 | 0.3958 | | 0.0065 | 47.0 | 2679 | 2.4440 | 0.3958 | | 0.0065 | 48.0 | 2736 | 2.4689 | 0.3784 | | 0.0065 | 49.0 | 2793 | 2.4725 | 0.3784 | | 0.0065 | 50.0 | 2850 | 2.4718 | 0.3784 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
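For reference, a minimal classification sketch. The repo id below is a placeholder from the card's model name, and the label names depend on the fine-tuning config:

```python
from transformers import pipeline

# Placeholder repo id from the card's model name; replace with the real one.
clf = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sarcasm")
print(clf("Oh great, another Monday morning."))
```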
Deniskin/emailer_medium_300
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
# SoftVC VITS Singing Voice Conversion

In the field of Singing Voice Conversion, SoVitsSvc is not the only project; there are many others, which will not be listed here. The project has been officially discontinued and archived. However, other enthusiasts have created their own branches and continue to maintain the SoVitsSvc project (still unrelated to SvcDevelopTeam and the repository maintainers), and have made some big changes of their own that you can explore for yourself.

#### ✨ A fork with a greatly improved interface: [34j/so-vits-svc-fork](https://github.com/34j/so-vits-svc-fork)

#### ✨ A client that supports real-time conversion: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)

#### This project is fundamentally different from VITS: VITS is TTS, while this project is SVC. This project cannot perform TTS, VITS cannot perform SVC, and the two projects' models are not interchangeable.

## Disclaimer

This project is an open-source, offline project, and all members of SvcDevelopTeam and all developers and maintainers of this project (hereinafter referred to as contributors) have no control over it. The contributors of this project have never provided any organization or individual with any form of assistance, including but not limited to dataset extraction, dataset processing, computing support, training support, inference, etc. Contributors to the project do not and cannot know what users are using the project for. Therefore, all AI models and synthesized audio based on training with this project have nothing to do with the contributors of this project. All problems arising therefrom shall be borne by the user.

This project runs completely offline and cannot collect any user information or obtain user input data. Therefore, contributors to this project are not aware of any user input or models, and are therefore not responsible for any user input.

This project is only a framework project; it does not have speech-synthesis functionality itself, and all such functionality requires users to train the models themselves. Meanwhile, no model is attached to this project, and any secondarily distributed project has nothing to do with the contributors of this project.

## 📏 Terms of Use

# Warning: Please resolve dataset authorization issues on your own. You shall be solely responsible for any problems caused by training on non-authorized datasets and for all consequences thereof. The repository and its maintainer, svc develop team, have nothing to do with the consequences!

1. This project is established for academic exchange purposes only and is intended for communication and learning purposes. It is not intended for production environments.
2. Any videos based on sovits that are published on video platforms must clearly indicate in the description that they are used for voice changing and must specify the input source of the voice or audio. For example, if you use videos or audio published by others and separate the vocals as the input source for conversion, you must provide clear links to the original video or music. If your own voice, or voices synthesized by other commercial vocal-synthesis software, are used as the input source for conversion, you must also explain this in the description.
3. You shall be solely responsible for any infringement problems caused by the input source. When using other commercial vocal-synthesis software as the input source, please ensure that you comply with that software's terms of use.
   Note that many vocal-synthesis engines clearly state in their terms of use that they cannot be used for input-source conversion.
4. It is forbidden to use the project to engage in illegal activities or in religious and political activities. The project developers firmly oppose such activities. Anyone who does not agree with this clause is prohibited from using the project.
5. Continuing to use this project is deemed as agreeing to the relevant provisions stated in this repository README. This repository README has the obligation to persuade, and is not responsible for any subsequent problems that may arise.
6. If you use this project for any other plan, please contact and inform the author of this repository in advance. Thank you very much.

## 🆕 Update!

> Updated the 4.0-v2 model; the entire process is the same as 4.0. Compared to 4.0, there is some improvement in certain scenarios, but there are also some regressions. Please refer to the [4.0-v2 branch](https://github.com/svc-develop-team/so-vits-svc/tree/4.0-v2) for more information.

## 📝 4.0 Feature list of branches

| Branch | Feature | Compatible with the main-branch model? |
| :-------------: | :----------: | :------------: |
| 4.0 | main branch | - |
| 4.0v2 | Uses the VISinger2 model | incompatible |
| 4.0-Vec768-Layer12 | The feature input is the Layer 12 Transformer output of ContentVec | incompatible |

## 📝 Model Introduction

The singing voice conversion model uses the SoftVC content encoder to extract source-audio speech features, then feeds the vectors directly into VITS instead of converting them to a text-based intermediate representation; thus the pitch and intonation are preserved. Additionally, the vocoder is changed to [NSF HiFiGAN](https://github.com/openvpi/DiffSinger/tree/refactor/modules/nsf_hifigan) to solve the problem of sound interruption.

### 🆕 4.0 Version Update Content

- Feature input is changed to [ContentVec](https://github.com/auspicious3000/contentvec)
- The sampling rate is unified to 44100 Hz
- Due to the change of hop size and other parameters, as well as the streamlining of some model structures, the GPU memory required for inference is **significantly reduced**. The 44 kHz GPU memory usage of version 4.0 is even smaller than the 32 kHz usage of version 3.0.
- Some code structures have been adjusted
- The dataset creation and training process are consistent with version 3.0, but the model is completely incompatible, and the dataset needs to be fully pre-processed again
- Added option 1: automatic pitch prediction for vc mode, which means that you don't need to manually enter the pitch key when converting speech, and the pitch of male and female voices can be converted automatically. However, this mode will cause pitch shift when converting songs.
- Added option 2: reduce timbre leakage through a k-means clustering scheme, making the timbre more similar to the target timbre.
- Added option 3: added the [NSF-HIFIGAN Enhancer](https://github.com/yxlllc/DDSP-SVC), which has a certain sound-quality enhancement effect on some models with small training sets, but has a negative effect on well-trained models, so it is disabled by default

## 💬 About Python Version

After conducting tests, we believe that the project runs stably on `Python 3.8.9`.
## 📥 Pre-trained Model Files

#### **Required**

- ContentVec: [checkpoint_best_legacy_500.pt](https://ibm.box.com/s/z1wgl1stco8ffooyatzdwsqn2psd9lrr)
  - Place it under the `hubert` directory

```shell
# contentvec
wget -P hubert/ http://obs.cstcloud.cn/share/obs/sankagenkeshi/checkpoint_best_legacy_500.pt
# Alternatively, you can manually download and place it in the hubert directory
```

#### **Optional (strongly recommended)**

- Pre-trained model files: `G_0.pth` `D_0.pth`
  - Place them under the `logs/44k` directory

Get them from svc-develop-team (TBD) or anywhere else. Although the pre-trained models generally do not cause any copyright problems, please pay attention to this; for example, ask the author in advance, or check that the author has clearly stated the permitted uses in the description.

#### **Optional (select as required)**

If you are using the NSF-HIFIGAN enhancer, you will need to download the pre-trained NSF-HIFIGAN model; if not, you can skip this.

- Pre-trained NSF-HIFIGAN vocoder: [nsf_hifigan_20221211.zip](https://github.com/openvpi/vocoders/releases/download/nsf-hifigan-v1/nsf_hifigan_20221211.zip)
  - Unzip and place the four files under the `pretrain/nsf_hifigan` directory

```shell
# nsf_hifigan
wget -P pretrain/ https://github.com/openvpi/vocoders/releases/download/nsf-hifigan-v1/nsf_hifigan_20221211.zip
unzip -od pretrain/nsf_hifigan pretrain/nsf_hifigan_20221211.zip
# Alternatively, you can manually download and place it in the pretrain/nsf_hifigan directory
# URL: https://github.com/openvpi/vocoders/releases/tag/nsf-hifigan-v1
```

## 📊 Dataset Preparation

Simply place the dataset in the `dataset_raw` directory with the following file structure.

```
dataset_raw
├───speaker0
│   ├───xxx1-xxx1.wav
│   ├───...
│   └───Lxx-0xx8.wav
└───speaker1
    ├───xx2-0xxx2.wav
    ├───...
    └───xxx7-xxx007.wav
```

You can customize the speaker name.

```
dataset_raw
└───suijiSUI
    ├───1.wav
    ├───...
    └───25788785-20221210-200143-856_01_(Vocals)_0_0.wav
```

## 🛠️ Preprocessing

### 0. Slice audio

Slice to `5s - 15s`; a bit longer is no problem. Audio that is too long may lead to `torch.cuda.OutOfMemoryError` during training or even pre-processing.

Use [audio-slicer-GUI](https://github.com/flutydeer/audio-slicer) or [audio-slicer-CLI](https://github.com/openvpi/audio-slicer).

In general, only the `Minimum Interval` needs to be adjusted. For speech audio it usually stays at the default. For singing audio it can be reduced to `100` or even `50`.

After slicing, delete audio that is too long or too short.

### 1. Resample to 44100 Hz and mono

```shell
python resample.py
```

### 2. Automatically split the dataset into training and validation sets, and generate configuration files

```shell
python preprocess_flist_config.py
```

### 3. Generate hubert and f0

```shell
python preprocess_hubert_f0.py
```

After completing the above steps, the dataset directory will contain the preprocessed data, and the dataset_raw folder can be deleted.

#### You can modify some parameters in the generated config.json

* `keep_ckpts`: Keep the last `keep_ckpts` models during training. Set to `0` to keep them all. Default is `3`.
* `all_in_mem`: Load the entire dataset into RAM. This can be enabled when the disk I/O of some platforms is too low and the system memory is **much larger** than your dataset.
## 🏋️‍♀️ Training

```shell
python train.py -c configs/config.json -m 44k
```

## 🤖 Inference

Use [inference_main.py](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/inference_main.py)

```shell
# Example
python inference_main.py -m "logs/44k/G_30400.pth" -c "configs/config.json" -s "nen" -n "君の知らない物語-src.wav" -t 0
```

Required parameters:
- `-m` | `--model_path`: Path to the model.
- `-c` | `--config_path`: Path to the configuration file.
- `-s` | `--spk_list`: Target speaker name for conversion.
- `-n` | `--clean_names`: A list of wav file names located in the raw folder.
- `-t` | `--trans`: Pitch adjustment, supports positive and negative (semitone) values.

Optional parameters: see the next section
- `-a` | `--auto_predict_f0`: Automatic pitch prediction for voice conversion. Do not enable this when converting songs, as it can cause serious pitch issues.
- `-cl` | `--clip`: Forced voice slicing; duration in seconds. Set to 0 to turn it off (default).
- `-lg` | `--linear_gradient`: The crossfade length of two audio slices, in seconds. Adjust this value if the voice sounds discontinuous after forced slicing; otherwise the default of 0 is recommended.
- `-cm` | `--cluster_model_path`: Path to the clustering model. Fill in any value if clustering has not been trained.
- `-cr` | `--cluster_infer_ratio`: Proportion of the clustering solution, range 0-1. Fill in 0 if the clustering model has not been trained.
- `-fmp` | `--f0_mean_pooling`: Apply a mean filter (pooling) to f0, which may improve some hoarse sounds. Enabling this option will reduce inference speed.
- `-eh` | `--enhance`: Whether to use the NSF_HIFIGAN enhancer. This option improves sound quality for some models with small training sets, but has a negative effect on well-trained models, so it is turned off by default.

## 🤔 Optional Settings

If the results from the previous section are satisfactory, or if you didn't understand what is discussed in the following section, you can skip it; it won't affect model usage. (These optional settings have a relatively small impact; they may have some effect on certain specific data, but in most cases the difference may not be noticeable.)

### Automatic f0 prediction

During 4.0 model training, an f0 predictor is also trained, which can be used for automatic pitch prediction during voice conversion. However, if the results are not good, manual pitch input can be used instead. But please do not enable this feature when converting singing voice, as it may cause serious pitch shifting!
- Set `auto_predict_f0` to true in inference_main.

### Cluster-based timbre leakage control

Introduction: The clustering scheme can reduce timbre leakage and make the trained model sound more like the target's timbre (although this effect is not very obvious), but using clustering alone will lower the model's clarity (the model may sound unclear). Therefore, this model adopts a fusion method to linearly control the proportion of the clustering and non-clustering schemes. In other words, you can manually adjust the ratio between "sounding like the target's timbre" and "being clear and articulate" to find a suitable trade-off point.

The existing steps before clustering do not need to be changed. All you need to do is to train an additional clustering model, which has a relatively low training cost.

- Training process:
  - Train on a machine with good CPU performance. According to my experience, it takes about 4 minutes to train each speaker on a Tencent Cloud machine with a 6-core CPU.
  - Execute `python cluster/train_cluster.py`. The output model will be saved in `logs/44k/kmeans_10000.pt`.
- Inference process:
  - Specify `cluster_model_path` in `inference_main.py`.
  - Specify `cluster_infer_ratio` in `inference_main.py`, where `0` means not using clustering at all, `1` means only using clustering, and usually `0.5` is sufficient.

### F0 mean filtering

Introduction: Mean filtering of F0 can effectively reduce the hoarse sound caused by fluctuations in the predicted pitch (the hoarse sound caused by reverb or harmony cannot be eliminated for now). This function markedly improves some songs; however, some songs then sound out of tune. If the output sounds muffled after inference, consider enabling it.

- Set `f0_mean_pooling` to true in `inference_main.py`

### [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/svc-develop-team/so-vits-svc/blob/4.0/sovits4_for_colab.ipynb) [sovits4_for_colab.ipynb](https://colab.research.google.com/github/svc-develop-team/so-vits-svc/blob/4.0/sovits4_for_colab.ipynb)

**[23/03/16] No longer need to download hubert manually**

**[23/04/14] Support NSF_HIFIGAN enhancer**

## 📤 Exporting to Onnx

Use [onnx_export.py](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/onnx_export.py)

- Create a folder named `checkpoints` and open it
- Create a folder inside `checkpoints` as your project folder, named after your project, for example `aziplayer`
- Rename your model to `model.pth` and the configuration file to `config.json`, and place them in the `aziplayer` folder you just created
- Change `"NyaruTaffy"` in `path = "NyaruTaffy"` in [onnx_export.py](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/onnx_export.py) to your project name, i.e. `path = "aziplayer"`
- Run [onnx_export.py](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/onnx_export.py)
- Wait for it to finish running. A `model.onnx` will be generated in your project folder; that is the exported model.

### UI support for Onnx models

- [MoeSS](https://github.com/NaruseMioShirakana/MoeSS)
- [Hubert4.0](https://huggingface.co/NaruseMioShirakana/MoeSS-SUBModel)

Note: For Hubert Onnx models, please use the models provided by MoeSS. Currently, they cannot be exported independently (Hubert in fairseq has many unsupported operators and things involving constants that can cause errors or result in problems with the input/output shapes and results when exported).

CppDataProcess contains some functions used to preprocess data in MoeSS.

## ☀️ Previous contributors

For some reason the author deleted the original repository. Because of the negligence of the organization members, the contributor list was cleared, since all files were directly re-uploaded to this repository at the beginning of its reconstruction. A list of previous contributors has now been added to README.md.
*Some members are not listed at their own request.*

<table>
<tr>
<td align="center"><a href="https://github.com/MistEO"><img src="https://avatars.githubusercontent.com/u/18511905?v=4" width="100px;" alt=""/><br /><sub><b>MistEO</b></sub></a><br /></td>
<td align="center"><a href="https://github.com/XiaoMiku01"><img src="https://avatars.githubusercontent.com/u/54094119?v=4" width="100px;" alt=""/><br /><sub><b>XiaoMiku01</b></sub></a><br /></td>
<td align="center"><a href="https://github.com/ForsakenRei"><img src="https://avatars.githubusercontent.com/u/23041178?v=4" width="100px;" alt=""/><br /><sub><b>しぐれ</b></sub></a><br /></td>
<td align="center"><a href="https://github.com/TomoGaSukunai"><img src="https://avatars.githubusercontent.com/u/25863522?v=4" width="100px;" alt=""/><br /><sub><b>TomoGaSukunai</b></sub></a><br /></td>
<td align="center"><a href="https://github.com/Plachtaa"><img src="https://avatars.githubusercontent.com/u/112609742?v=4" width="100px;" alt=""/><br /><sub><b>Plachtaa</b></sub></a><br /></td>
<td align="center"><a href="https://github.com/zdxiaoda"><img src="https://avatars.githubusercontent.com/u/45501959?v=4" width="100px;" alt=""/><br /><sub><b>zd小达</b></sub></a><br /></td>
<td align="center"><a href="https://github.com/Archivoice"><img src="https://avatars.githubusercontent.com/u/107520869?v=4" width="100px;" alt=""/><br /><sub><b>凍聲響世</b></sub></a><br /></td>
</tr>
</table>

## 📚 Some legal provisions for reference

#### Any country, region, organization, or individual using this project must comply with the following laws.

#### Civil Code of the People's Republic of China

##### Article 1019

No organization or individual may infringe upon another person's portrait rights by defacing or defiling their image, or by forging it through information technology. Unless otherwise provided by law, no one may make, use, or publish the image of a portrait-rights holder without that person's consent. Without the consent of the portrait-rights holder, the holder of rights in a portrait work may not use or publish the portrait by publishing, reproducing, distributing, renting, exhibiting, or other means. The protection of a natural person's voice is governed, by reference, by the relevant provisions on portrait rights.

##### Article 1024

[Right to reputation] Civil-law subjects enjoy the right to reputation. No organization or individual may infringe upon another person's right to reputation by insult, defamation, or other means.

##### Article 1027

[Works infringing the right to reputation] Where a literary or artistic work published by a person describes real people and real events, or a specific person, and contains insulting or defamatory content that infringes another person's right to reputation, the injured party has the right to demand that the person bear civil liability in accordance with the law. Where a published literary or artistic work does not describe a specific person and merely contains plot elements resembling that person's circumstances, no civil liability is borne.

#### [Constitution of the People's Republic of China](http://www.gov.cn/guoqing/2018-03/22/content_5276318.htm)

#### [Criminal Law of the People's Republic of China](http://gongbao.court.gov.cn/Details/f8e30d0689b23f57bfc782d21035c3.html?sw=%E4%B8%AD%E5%8D%8E%E4%BA%BA%E6%B0%91%E5%85%B1%E5%92%8C%E5%9B%BD%E5%88%91%E6%B3%95)

#### [Civil Code of the People's Republic of China](http://gongbao.court.gov.cn/Details/51eb6750b8361f79be8f90d09bc202.html)

## 💪 Thanks to all contributors for their efforts

<a href="https://github.com/svc-develop-team/so-vits-svc/graphs/contributors" target="_blank">
  <img src="https://contrib.rocks/image?repo=svc-develop-team/so-vits-svc" />
</a>
Deniskin/essays_small_2000
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer metrics: - rouge - precision - recall - f1 model-index: - name: xsum_and_billsum_and_old results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xsum_and_billsum_and_old This model is a fine-tuned version of [psin/xsum_and_billsum](https://huggingface.co/psin/xsum_and_billsum) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6673 - Rouge1: 0.078 - Rouge2: 0.0127 - Rougel: 0.0648 - Rougelsum: 0.065 - Gen Len: 19.0 - Precision: 0.0673 - Recall: -0.2134 - F1: -0.0 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 72 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1 | Hashcode | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:-------:|:----:|:------------------------------------------------------------------:| | No log | 1.0 | 42 | 3.7474 | 0.0694 | 0.0093 | 0.0572 | 0.0575 | 19.0 | 0.0475 | -0.2208 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 2.0 | 84 | 3.7070 | 0.0748 | 0.0139 | 0.0621 | 0.0623 | 19.0 | 0.0667 | -0.2237 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 3.0 | 126 | 3.6862 | 0.0738 | 0.0125 | 0.0628 | 0.0631 | 19.0 | 0.0577 | -0.2232 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 4.0 | 168 | 3.6754 | 0.0774 | 0.0129 | 0.0646 | 0.0649 | 19.0 | 0.0747 | -0.2113 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 5.0 | 210 | 3.6695 | 0.0775 | 0.0127 | 0.0644 | 0.0647 | 19.0 | 0.0733 | -0.2139 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 6.0 | 252 | 3.6673 | 0.078 | 0.0127 | 0.0648 | 0.065 | 19.0 | 0.0673 | -0.2134 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
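For reference, a minimal summarization sketch. The repo id below is an assumption, namely that the fine-tune is published under the same namespace as its base model psin/xsum_and_billsum; replace it if the actual id differs:

```python
from transformers import pipeline

# Assumed repo id (same namespace as the base model); replace with the real one.
summarizer = pipeline("summarization", model="psin/xsum_and_billsum_and_old")

text = "The bill was introduced in the state legislature last week and covers ..."
# max_length matches the card's reported generation length of 19 tokens.
print(summarizer(text, max_length=19, min_length=5)[0]["summary_text"])
```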
Deniskin/essays_small_2000i
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - text-classification language: - pt widget: - text: "I love AutoTrain 🤗" datasets: - speedppc/autotrain-data-keyword-intent-pg-v1 co2_eq_emissions: emissions: 0.7171064425219756 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 57044132206 - CO2 Emissions (in grams): 0.7171 ## Validation Metrics - Loss: 0.956 - Accuracy: 0.623 - Macro F1: 0.630 - Micro F1: 0.623 - Weighted F1: 0.624 - Macro Precision: 0.642 - Micro Precision: 0.623 - Weighted Precision: 0.630 - Macro Recall: 0.624 - Micro Recall: 0.623 - Weighted Recall: 0.623 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/speedppc/autotrain-keyword-intent-pg-v1-57044132206 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("speedppc/autotrain-keyword-intent-pg-v1-57044132206", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("speedppc/autotrain-keyword-intent-pg-v1-57044132206", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
DeskDown/MarianMixFT_en-hi
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
Please use at your own risk. The merge sources are ChilloutMix, ReV Animated, and waifu diffusion-v1-2, so please review them yourself and use this model in accordance with each distributor's license and terms.

Prompts tend to work well in the form "art by <artist name>".

Example: 1girl, art by renoir, art by mucha, art by rembrandt

*It may work even better to use the name of an era, art movement, or style rather than an artist's name.

Also, images generated with this model very often have a picture frame around the edge, so adding (picture frame:1.4) to the negative prompt is recommended.

Example: 1girl, art by symbolism, art by Impressionism, art by Romanticism, art by neoclassicism

------------------------------------------------------------------------------------------------------------

利用に関しては自己責任でお願いします。
マージ元はChilloutMixとReV Animatedとwaifu diffusion-v1-2なので、各自で確認して各配布元のライセンスと規約に従って使ってね。

プロンプトは art by 画家の名前 みたいにするといい感じ
例:1girl,art by renoir,art by mucha,art by rembrandt
※もしかしたら作家名よりも時代や芸術運動や様式の名前入れたほうがいいかも?
あとこれ使うと高確率で外周に額縁が出るのでネガティブに(picture frame:1.4)推奨
例:1girl,art by symbolism,art by Impressionism,art by Romanticism,art by neoclassicism
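As a concrete illustration of the prompting advice above, here is a minimal diffusers sketch. The repo path is a hypothetical placeholder (this card does not name or link the checkpoint), and note that the `(picture frame:1.4)` weighting syntax is an AUTOMATIC1111-style convention; plain diffusers treats it as literal text, so the negative prompt below is passed unweighted.

```python
# A minimal sketch, assuming the merged checkpoint has been converted to the
# diffusers format and uploaded to a hypothetical repo "your-name/artist-merge".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "your-name/artist-merge",  # hypothetical placeholder, not a real repo
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="1girl, art by renoir, art by mucha, art by rembrandt",
    # The card recommends negatively weighting "picture frame"; plain diffusers
    # has no (word:1.4) weighting syntax, so it is passed as ordinary text here.
    negative_prompt="picture frame",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("artist_style.png")
```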
DeskDown/MarianMixFT_en-id
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
---
license: creativeml-openrail-m
datasets:
- gsdf/EasyNegative
- Duskfallcrew/DuskfallCrewArtStyle_Lora
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- art
- stable diffusion
---

Child model of Neneko Splat: https://huggingface.co/Duskfallcrew/N3N3K0-Spl4T

Join our Reddit: https://www.reddit.com/r/earthndusk/

If you have requests or concerns, or want to help out (we're still looking for beta testers), JOIN THE DISCORD AND DEMAND THINGS OF US: https://discord.gg/Da7s8d3KJ7

Listen to the music that we've made that goes with our art: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38

We stream a lot of our testing on twitch: https://www.twitch.tv/duskfallcrew

Any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew

Generation sites MAY use this as long as a request is made or credit is given. We are NOT RESPONSIBLE for your use of this model or for anything made downstream with it. DO NOT PRODUCE ILLEGAL CONTENT WITH THIS MODEL - we're still not responsible, we just said don't do it. DO USE THIS. Feel free to ask for the safetensors version in another repo!

Samples Here: https://huggingface.co/Duskfallcrew/Eggnog-Milkshake/tree/main/Eggnog%20Samples

![EggnogMilkshake](https://huggingface.co/Duskfallcrew/Eggnog-Milkshake/resolve/main/Eggnog%20Samples/01983-EggnogMilkshake_1957832516.png)
![EggnogMilkshake](https://huggingface.co/Duskfallcrew/Eggnog-Milkshake/resolve/main/Eggnog%20Samples/01993-EggnogMilkshake_3316908143.png)
![EggnogMilkshake](https://huggingface.co/Duskfallcrew/Eggnog-Milkshake/resolve/main/Eggnog%20Samples/01991-EggnogMilkshake_3885795197.png)
![EggnogMilkshake](https://huggingface.co/Duskfallcrew/Eggnog-Milkshake/resolve/main/Eggnog%20Samples/01981-EggnogMilkshake_1854440280.png)
DeskDown/MarianMixFT_en-ms
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
license: openrail
---

The Art model is based on Elysium_v1 & WaifuDiffusion 1.2 + WaifuDiffusion 1.3.5.

The illust model is based on Elysium_v1 + WaifuDiffusion 1.2 & WaifuDiffusion 1.3.5.

Example settings:

Prompt: victorian,from below,milf,nude,white langerie,((detailed pupils)), ((muscular)), (thick thigh), elegant pose, (ruffled neckwear) , (white panties),((oil painting)), smirk,(intricate),(high detail), ((sharp contrast)), (vivid colors),(nier), (wlop), norman rockwell,(back focus),bedroom,dynamic angle

Negative prompt: anime, (frame:1.1),(nipple:1.1), (ugly:1.3), (duplicate:1.2), (morbid:1.1), (mutilated:1.1), (tranny:1.2), (trans:1.2), (transsexual:1.2), (hermaphrodite), [out of frame], extra fingers, mutated hands, (poorly drawn hands:1.1), (poorly drawn face:1.1), (mutation:1.1), (deformed:1.2), (ugly:1.1), blurry, (bad anatomy:1.1), (bad proportions:1.2), (extra limbs:1.1), cloned face, (disfigured:1.2),(more than 2 nipples:1.2), [[[adult]]], ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), (missing arms:1.1), (missing legs:1.1), (extra arms:1.2), (extra legs:1.2), mutated hands, (fused fingers), (too many fingers), (long neck:1.2)
DeskDown/MarianMixFT_en-th
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: mt5-aym-lex results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-aym-lex This model is a fine-tuned version of [alvations/mt5-aym-lex](https://huggingface.co/alvations/mt5-aym-lex) on the None dataset. It achieves the following results on the evaluation set: - Bleu: 3.1238 - Chrf: 24.4605 - Gen Len: 17.3872 - Loss: 0.1883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 600 ### Training results | Training Loss | Epoch | Step | Bleu | Chrf | Gen Len | Validation Loss | |:-------------:|:-----:|:-----:|:------:|:-------:|:-------:|:---------------:| | 0.067 | 4.86 | 20000 | 2.9344 | 24.2586 | 17.5005 | 0.1844 | | 0.065 | 9.71 | 40000 | 3.1238 | 24.4605 | 17.3872 | 0.1883 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
DeskDown/MarianMixFT_en-vi
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: polyglot-12.8b-koalpaca-v1.1b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # polyglot-12.8b-koalpaca-v1.1b This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.12.1
Devrim/prism-default
[ "license:mit" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 491.31 +/- 47.72 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
DevsIA/imagenes
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
datasets:
- mosaicml/dolly_hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# VMware/open-llama-0.3T-7B-instruct-dolly-hhrlhf

Fully Open Source, Commercially viable. The instruction dataset, [mosaicml/dolly_hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf), is under cc-by-sa-3.0, and the Language Model ([openlm-research/open_llama_7b_preview_300bt](https://huggingface.co/openlm-research/open_llama_7b_preview_300bt/tree/main/open_llama_7b_preview_300bt_transformers_weights)) is under the apache-2.0 License.

## Use in Transformers

Please load the tokenizer with the 'add_bos_token = True' parameter, as the underlying OpenLLaMa model and this model were trained with a BOS token.

```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-0.3T-7B-instruct-dolly-hhrlhf'

tokenizer = AutoTokenizer.from_pretrained(model_name, add_bos_token = True)

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype= torch.float16, device_map = 'sequential')

prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt= 'how do I bake a cake?'

inputt = prompt_template.format(instruction= prompt)
input_ids = tokenizer(inputt, return_tensors="pt").input_ids.to("cuda")

output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output= tokenizer.decode(output1[0])

print(output)

'''
Baking a cake is a simple process. You will need to prepare a cake mixture, then bake it in the oven. You can add various ingredients to the cake mixture, such as fruit, nuts, or spices, to make it flavorful. Baking a cake can be fun, as it creates a delicious dessert!</s>
'''
```

## Drawbacks

<ul>
<li>The model was trained on a partially trained Open-LLaMA checkpoint (300B tokens).</li>
</ul>

## Evaluation

<B>TODO</B>
DewiBrynJones/wav2vec2-large-xlsr-welsh
[ "cy", "dataset:common_voice", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: validation args: default metrics: - name: Rouge1 type: rouge value: 28.6353 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4499 - Rouge1: 28.6353 - Rouge2: 8.007 - Rougel: 22.5444 - Rougelsum: 22.5477 - Gen Len: 18.8209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6885 | 1.0 | 25506 | 2.4499 | 28.6353 | 8.007 | 22.5444 | 22.5477 | 18.8209 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
DheerajPranav/Dialo-GPT-Rick-bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_keras_callback model-index: - name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5-1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5-1 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0334 - Train Accuracy: 0.9893 - Validation Loss: 0.1265 - Validation Accuracy: 0.9599 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0334 | 0.9893 | 0.1265 | 0.9599 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.7.0 - Datasets 2.10.1 - Tokenizers 0.12.1
Dhruva/Interstellar
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks sheep
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - jacksonchen/dreambooth_lora_xiyangyang2

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of sks sheep" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: True.
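For completeness, here is a minimal loading sketch for these weights. It assumes a diffusers version recent enough to provide `load_lora_weights`, which also restores the text-encoder LoRA noted above; everything except the two repo ids is an assumption rather than something stated in this card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model this LoRA was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repo (UNet and text encoder).
pipe.load_lora_weights("jacksonchen/dreambooth_lora_xiyangyang2")

image = pipe("a photo of sks sheep", num_inference_steps=30).images[0]
image.save("sks_sheep.png")
```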
DiegoBalam12/institute_classification
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language:
- ur
tags:
- sentiment analysis
---

# Sentiment Binary Classifier for Urdu

## muril_base_cased_urdu_sentiment

The base model is [google/muril-base-cased](https://huggingface.co/google/muril-base-cased), a BERT model pre-trained on 17 Indian languages and their transliterated counterparts. The Urdu sentiment analysis dataset is from [mirfan899](https://github.com/mirfan899/Urdu/tree/master/sentiment).

## Usage

### Example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("MichaelHuang/muril_base_cased_urdu_sentiment")

# Define the input text
text = '''
لیکن مسٹر پوتن نے یہ بھی کہا کہ یہ منصوبہ اسی وقت پیش کیا جا سکتا ہے جب لوگ 'مغرب اور کیئو میں' اس کے لیے تیار ہوں۔

روسی رہنما نے منگل کو ماسکو میں چینی صدر شی جن پنگ سے ملاقات کی جس میں روس یوکرین جنگ اور دونوں ممالک کے درمیان تعلقات پر تبادلہ خیال کیا گیا۔

گذشتہ ماہ شائع ہونے والے چین کے منصوبے میں واضح طور پر روس سے یوکرین چھوڑنے کا مطالبہ نہیں کیا گیا ہے۔
'''

# Tokenize the input text
inputs = tokenizer(text, return_tensors='pt')

# Make a prediction
outputs = model(**inputs)
predicted_class = torch.argmax(outputs.logits).item()

# Print the predicted class
if predicted_class == 1:
    print('Positive')
else:
    print('Negative')
```

### Training results

| eval_loss | epoch | step | eval_accuracy |
|:-------------:|:-----:|:----:|:--------:|
| 0.29 | 1.0 | 3000 | 0.89 |
| 0.32 | 2.0 | 6000 | 0.91 |
| 0.33 | 3.0 | 9000 | 0.91 |
Digakive/Hsgshs
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer datasets: - samsum metrics: - rouge - precision - recall - f1 model-index: - name: xsum_and_billsum_and_samsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: test args: samsum metrics: - name: Rouge1 type: rouge value: 0.3798 - name: Precision type: precision value: 0.4394 - name: Recall type: recall value: 0.3237 - name: F1 type: f1 value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xsum_and_billsum_and_samsum This model is a fine-tuned version of [psin/xsum_and_billsum](https://huggingface.co/psin/xsum_and_billsum) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.8894 - Rouge1: 0.3798 - Rouge2: 0.15 - Rougel: 0.316 - Rougelsum: 0.3158 - Gen Len: 16.1416 - Precision: 0.4394 - Recall: 0.3237 - F1: 0.0 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 72 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1 | Hashcode | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:------:|:---:|:------------------------------------------------------------------:| | 2.1137 | 1.0 | 1842 | 1.8894 | 0.3798 | 0.15 | 0.316 | 0.3158 | 16.1416 | 0.4394 | 0.3237 | 0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Dizoid/Lll
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Simple is better than <hl> complex <hl>. </s>"
license: mit
---

## T5 for question-generation
This is a [t5-small](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.

You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example

`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For more details see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

nlp = pipeline("question-generation")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
```
Dmitriiserg/Pxd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - image-classification - timm library_tag: timm --- # Model card for danielshinsony/resnet18-random
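This card ships no usage snippet, so the following is a minimal sketch of one plausible way to load the model, assuming the checkpoint is stored in timm's standard Hugging Face Hub layout (suggested by the `timm` tag, but not confirmed by the card):

```python
import timm
import torch

# Load the checkpoint directly from the Hub via timm's hf_hub prefix.
model = timm.create_model("hf_hub:danielshinsony/resnet18-random", pretrained=True)
model.eval()

# Run a dummy forward pass; ResNet-18 expects 3x224x224 inputs by default.
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)
```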
Dmitry12/sber
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - ckb license: mit tags: - hf-tts-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: SpeechT5 tts ckb0- Saber Molaei results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 tts ckb0- Saber Molaei This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6144 | 3.15 | 1000 | 0.5760 | | 0.5881 | 6.31 | 2000 | 0.5463 | | 0.5694 | 9.46 | 3000 | 0.5359 | ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
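This card gives no inference example, so below is a minimal sketch following the standard SpeechT5 TTS recipe. The repo id is a hypothetical placeholder (the card does not state it), and the CMU ARCTIC x-vector used as the speaker embedding is a common default, not something this fine-tune specifies.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "saber-molaei/speecht5_tts_ckb"  # hypothetical placeholder
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="سڵاو", return_tensors="pt")

# SpeechT5 needs a speaker x-vector; this one comes from CMU ARCTIC (an assumption).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```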
Waynehillsdev/Wayne_NLP_mT5
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
Access to model ShawnGGG/perfect is restricted and you are not in the authorized list. Visit https://huggingface.co/ShawnGGG/perfect to ask for access.
Doogie/Waynehills-KE-T5-doogie
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - gozfarb/ShareGPT_Vicuna_unfiltered --- ## LoRA Info: Please note that this is a highly experimental LoRA model. It may do some good stuff, it might do some undesirable stuff. Training is basically done now. Feel free to try it!~ **Important Note**: While this is trained on a cleaned ShareGPT dataset like Vicuna used, this was trained in the *Alpaca* format, so prompting should be something like: ``` ### Instruction: <prompt> (without the <>) ### Response: ``` Current upload: Fully trained adapter model (3 epochs). Secondary upload: checkpoint of epoch 2.97 (of 3) Thanks to MetaIX for initial seemingly successful testing of the first uploaded checkpoint (epoch 0.8) as well as epoch 1. ## Benchmarks **wikitext2:** 4.372413635253906 **ptb-new:** 24.69171714782715 **c4-new:** 6.469308853149414 Results generated with GPTQ evals (not quantized) thanks to [Neko-Institute-of-Science](https://huggingface.co/Neko-Institute-of-Science)
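To make the prompting format above concrete, here is a minimal PEFT inference sketch. Both repo ids are hypothetical placeholders, since the card names neither the base model nor the adapter repo explicitly:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-7b"        # hypothetical placeholder for the base model
adapter_id = "user/this-lora-adapter"  # hypothetical placeholder for this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-style prompt, exactly as described above.
prompt = "### Instruction:\nWrite a short poem about the sea.\n\n### Response:\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```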
Waynehillsdev/Waynehills_summary_tensorflow
[ "tf", "t5", "text2text-generation", "transformers", "generated_from_keras_callback", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # nikcheerla/nooks-amd-detection-v4 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nikcheerla/nooks-amd-detection-v4") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 language: - en tags: - nlp ---
DoyyingFace/bert-asian-hate-tweets-asian-unclean-with-clean-valid
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.95 +/- 15.54 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
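Filling in the TODO above, a minimal sketch might look like the following; the repo id and file name are hypothetical placeholders, since the card does not state them:

```python
import gymnasium as gym  # use `import gym` instead on older SB3 versions
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Hypothetical repo id and filename; substitute the real ones for this model.
checkpoint = load_from_hub(
    repo_id="user/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```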
albert-base-v1
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38,156
2023-05-10T06:12:14Z
--- license: mit tags: - generated_from_trainer model-index: - name: Finetuned_bert_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetuned_bert_model This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5644 - Rmse: 0.6048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6429 | 4.0 | 500 | 0.5644 | 0.6048 | ### Framework versions - Transformers 4.29.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
albert-large-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
687
2023-05-10T06:16:48Z
--- tags: - generated_from_trainer metrics: - rouge - precision - recall - f1 model-index: - name: xsum_and_billsum_and_samsum_old results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xsum_and_billsum_and_samsum_old This model is a fine-tuned version of [psin/xsum_and_billsum_and_samsum](https://huggingface.co/psin/xsum_and_billsum_and_samsum) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6572 - Rouge1: 0.0873 - Rouge2: 0.0186 - Rougel: 0.0712 - Rougelsum: 0.0711 - Gen Len: 19.0 - Precision: 0.069 - Recall: -0.2018 - F1: -0.0 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 72 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1 | Hashcode | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:-------:|:----:|:------------------------------------------------------------------:| | No log | 1.0 | 42 | 3.7360 | 0.0789 | 0.0152 | 0.0679 | 0.068 | 19.0 | 0.071 | -0.2095 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 2.0 | 84 | 3.6936 | 0.085 | 0.0193 | 0.071 | 0.0712 | 19.0 | 0.0898 | -0.2007 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 3.0 | 126 | 3.6739 | 0.0854 | 0.0202 | 0.0712 | 0.0713 | 19.0 | 0.0973 | -0.1935 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 4.0 | 168 | 3.6643 | 0.0865 | 0.0194 | 0.0709 | 0.0709 | 19.0 | 0.0845 | -0.197 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 5.0 | 210 | 3.6589 | 0.087 | 0.0186 | 0.0711 | 0.0711 | 19.0 | 0.0698 | -0.2012 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | | No log | 6.0 | 252 | 3.6572 | 0.0873 | 0.0186 | 0.0712 | 0.0711 | 19.0 | 0.069 | -0.2018 | -0.0 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.1)-rescaled | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
bert-base-german-dbmdz-uncased
[ "pytorch", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
68,305
2023-05-10T06:36:00Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# API Inference

![generated from stablediffusionapi.com](https://d1okzptojspljx.cloudfront.net/generations/8589140601669473451.png)
## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "4o7gsdify2v"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Model link: [View model](https://stablediffusionapi.com/models/4o7gsdify2v)

Credits: [View credits](https://civitai.com/?query=model_search)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "4o7gsdify2v",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
bert-large-cased-whole-word-masking-finetuned-squad
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,214
2023-05-10T06:39:21Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# API Inference

![generated from stablediffusionapi.com](https://d1okzptojspljx.cloudfront.net/generations/8589140601669473451.png)
## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "qefuybvsidv"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Model link: [View model](https://stablediffusionapi.com/models/qefuybvsidv)

Credits: [View credits](https://civitai.com/?query=model_search)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "qefuybvsidv",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
bert-large-cased-whole-word-masking
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,316
2023-05-10T06:39:43Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# API Inference

![generated from stablediffusionapi.com](https://d1okzptojspljx.cloudfront.net/generations/8589140601669473451.png)
## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "lvw78gef3fgs"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Model link: [View model](https://stablediffusionapi.com/models/lvw78gef3fgs)

Credits: [View credits](https://civitai.com/?query=model_search)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "lvw78gef3fgs",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
bert-large-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388,769
2023-05-10T06:40:03Z
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # API Inference ![generated from stablediffusionapi.com](https://d1okzptojspljx.cloudfront.net/generations/8589140601669473451.png) ## Get API Key Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed. Replace the key in the code below and change **model_id** to "vn7e92vdv2". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/vn7e92vdv2) Credits: [View credits](https://civitai.com/?query=model_search) View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "vn7e92vdv2",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
bert-large-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,058,496
2023-05-10T06:42:51Z
--- license: apache-2.0 widget: - src: >- https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: 音乐表演, 体育运动 example_title: 猫和狗 pipeline_tag: zero-shot-classification --- [**Chinese version**](README_CN.md) | [**English**](README.md) # Introduction This project aims to provide a better Chinese CLIP model. The training data used in this project consists of publicly accessible image URLs and related Chinese text descriptions, totaling 400 million pairs. After screening, we ultimately used 100 million pairs for training. This project is produced by QQ-ARC Joint Lab, Tencent PCG. We have also open-sourced our code on GitHub, [QA-CLIP](https://github.com/TencentARC-QQ/QA-CLIP), and we welcome you to star it! <br><br> # Models and Results <span id="model_card"></span> ## Model Card QA-CLIP currently has three open-source models of different sizes; their model information and download links are shown in the table below: <table border="1" width="100%"> <tr align="center"> <th>Model</th><th>Checkpoint</th><th>Params</th><th>Vision</th><th>Params of Vision</th><th>Text</th><th>Params of Text</th><th>Resolution</th> </tr> <tr align="center"> <td>QA-CLIP<sub>RN50</sub></td><td><a href="https://huggingface.co/TencentARC/QA-CLIP/resolve/main/QA-CLIP-RN50.pt">Download</a></td><td>77M</td><td>ResNet50</td><td>38M</td><td>RBT3</td><td>39M</td><td>224</td> </tr> <tr align="center"> <td>QA-CLIP<sub>ViT-B/16</sub></td><td><a href="https://huggingface.co/TencentARC/QA-CLIP/resolve/main/QA-CLIP-base.pt">Download</a></td><td>188M</td><td>ViT-B/16</td><td>86M</td><td>RoBERTa-wwm-Base</td><td>102M</td><td>224</td> </tr> <tr align="center"> <td>QA-CLIP<sub>ViT-L/14</sub></td><td><a href="https://huggingface.co/TencentARC/QA-CLIP/resolve/main/QA-CLIP-large.pt">Download</a></td><td>406M</td><td>ViT-L/14</td><td>304M</td><td>RoBERTa-wwm-Base</td><td>102M</td><td>224</td> </tr> </table> <br> ## Results We conducted zero-shot tests on the [MUGE Retrieval](https://tianchi.aliyun.com/muge), [Flickr30K-CN](https://github.com/li-xirong/cross-lingual-cap), and [COCO-CN](https://github.com/li-xirong/coco-cn) datasets for image-text retrieval tasks. For the zero-shot image classification task, we tested on the ImageNet dataset.
The test results are shown in the table below: **Flickr30K-CN Zero-shot Retrieval (Official Test Set)**: <table border="1" width="120%"> <tr align="center"> <th>Task</th><th colspan="3">Text-to-Image</th><th colspan="3">Image-to-Text</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>RN50</sub></td><td>48.8</td><td>76.0</td><td>84.6</td><td>60.0</td><td>85.9</td><td>92.0</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>50.5</b></td><td><b>77.4</b></td><td><b>86.1</b></td><td><b>67.1</b></td><td><b>87.9</b></td><td><b>93.2</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>62.7</td><td>86.9</td><td>92.8</td><td>74.6</td><td>93.5</td><td>97.1</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>63.8</b></td><td><b>88.0</b></td><td><b>93.2</b></td><td><b>78.4</b></td><td><b>96.1</b></td><td><b>98.5</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>68.0</td><td>89.7</td><td>94.4</td><td>80.2</td><td>96.6</td><td>98.2</td> </tr> <tr align="center"> <td width="120%">AltClip<sub>ViT-L/14</sub></td><td><b>69.7</b></td><td>90.1</td><td><b>94.8</b></td><td>84.8</td><td>97.7</td><td>99.1</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td>69.3</td><td><b>90.3</b></td><td>94.7</td><td><b>85.3</b></td><td><b>97.9</b></td><td><b>99.2</b></td> </tr> </table> <br> **MUGE Zero-shot Retrieval (Official Validation Set)**: <table border="1" width="120%"> <tr align="center"> <th>Task</th><th colspan="3">Text-to-Image</th><th colspan="3">Image-to-Text</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>RN50</sub></td><td>42.6</td><td>68.5</td><td>78.0</td><td>30.0</td><td>56.2</td><td>66.9</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>44.0</b></td><td><b>69.9</b></td><td><b>79.5</b></td><td><b>32.4</b></td><td><b>59.5</b></td><td><b>70.3</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>52.1</td><td>76.7</td><td>84.4</td><td>38.7</td><td>65.6</td><td>75.1</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>53.2</b></td><td><b>77.7</b></td><td><b>85.1</b></td><td><b>40.7</b></td><td><b>68.2</b></td><td><b>77.2</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>56.4</td><td>79.8</td><td>86.2</td><td>42.6</td><td>69.8</td><td>78.6</td> </tr> <tr align="center"> <td width="120%">AltClip<sub>ViT-L/14</sub></td><td>29.6</td><td>49.9</td><td>58.8</td><td>21.4</td><td>42.0</td><td>51.9</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td><b>57.4</b></td><td><b>81.0</b></td><td><b>87.7</b></td><td><b>45.5</b></td><td><b>73.0</b></td><td><b>81.4</b></td> </tr> </table> <br> **COCO-CN Zero-shot Retrieval (Official Test Set)**: <table border="1" width="120%"> <tr align="center"> <th>Task</th><th colspan="3">Text-to-Image</th><th colspan="3">Image-to-Text</th> </tr> <tr align="center"> 
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>RN50</sub></td><td>48.1</td><td>81.3</td><td>90.5</td><td>50.9</td><td>81.1</td><td>90.5</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>50.1</b></td><td><b>82.5</b></td><td><b>91.7</b></td><td><b>56.7</b></td><td><b>85.2</b></td><td><b>92.9</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>62.2</td><td>87.1</td><td>94.9</td><td>56.3</td><td>84.0</td><td>93.3</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>62.9</b></td><td><b>87.7</b></td><td><b>94.7</b></td><td><b>61.5</b></td><td><b>87.6</b></td><td><b>94.8</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>64.9</td><td>88.8</td><td>94.2</td><td>60.6</td><td>84.4</td><td>93.1</td> </tr> <tr align="center"> <td width="120%">AltClip<sub>ViT-L/14</sub></td><td>63.5</td><td>87.6</td><td>93.5</td><td>62.6</td><td><b>88.5</b></td><td><b>95.9</b></td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td><b>65.7</b></td><td><b>90.2</b></td><td><b>95.0</b></td><td><b>64.5</b></td><td>88.3</td><td>95.1</td> </tr> </table> <br> **Zero-shot Image Classification on ImageNet**: <table border="1" width="120%"> <tr align="center"> <th>Task</th><th colspan="1">ImageNet</th> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>RN50</sub></td><td>33.5</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>35.5</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>48.4</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>49.7</b></td> </tr> <tr align="center"> <td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>54.7</td> </tr> <tr align="center", style="background-color: Honeydew;"> <td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td><b>55.8</b></td> </tr> </table> <br> <br><br> # Getting Started ## Installation Requirements Environment configuration requirements: * python >= 3.6.4 * pytorch >= 1.8.0 (with torchvision >= 0.9.0) * CUDA Version >= 10.2 Install required packages:

```bash
cd /yourpath/QA-CLIP-main
pip install --upgrade pip
pip install -r requirements.txt
```

## Inference Code

```bash
export PYTHONPATH=/yourpath/QA-CLIP-main
```

Inference code example:

```python
import torch
from PIL import Image

import clip as clip
from clip import load_from_name, available_models
print("Available models:", available_models())
# Available models: ['ViT-B-16', 'ViT-L-14', 'RN50']

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
model.eval()
image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the features. Please use the normalized features for downstream tasks.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)
```

<br><br> ## Prediction and Evaluation ### Download Image-text Retrieval Test Dataset In the <b>[Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP)</b> project, the test sets have already been preprocessed. Here are the download links they provided: MUGE dataset: [download link](https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/datasets/MUGE.zip) Flickr30K-CN dataset: [download link](https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/datasets/Flickr30k-CN.zip) Additionally, obtaining the [COCO-CN](https://github.com/li-xirong/coco-cn) dataset requires applying to the original author. ### Download ImageNet Dataset Please download the raw data yourself; the [Chinese labels](http://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/datasets/ImageNet-1K/label_cn.txt) and [English labels](http://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/datasets/ImageNet-1K/label.txt) are provided by the <b>[Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP)</b> project. ### Image-text Retrieval Evaluation The image-text retrieval evaluation can be run as follows:

```bash
split=test # Designate the computation of features for the valid or test set
resume=your_ckp_path
DATAPATH=your_DATAPATH
dataset_name=Flickr30k-CN
# dataset_name=MUGE

python -u eval/extract_features.py \
    --extract-image-feats \
    --extract-text-feats \
    --image-data="${DATAPATH}/datasets/${dataset_name}/lmdb/${split}/imgs" \
    --text-data="${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl" \
    --img-batch-size=32 \
    --text-batch-size=32 \
    --context-length=52 \
    --resume=${resume} \
    --vision-model=ViT-B-16 \
    --text-model=RoBERTa-wwm-ext-base-chinese

python -u eval/make_topk_predictions.py \
    --image-feats="${DATAPATH}/datasets/${dataset_name}/${split}_imgs.img_feat.jsonl" \
    --text-feats="${DATAPATH}/datasets/${dataset_name}/${split}_texts.txt_feat.jsonl" \
    --top-k=10 \
    --eval-batch-size=32768 \
    --output="${DATAPATH}/datasets/${dataset_name}/${split}_predictions.jsonl"

python -u eval/make_topk_predictions_tr.py \
    --image-feats="${DATAPATH}/datasets/${dataset_name}/${split}_imgs.img_feat.jsonl" \
    --text-feats="${DATAPATH}/datasets/${dataset_name}/${split}_texts.txt_feat.jsonl" \
    --top-k=10 \
    --eval-batch-size=32768 \
    --output="${DATAPATH}/datasets/${dataset_name}/${split}_tr_predictions.jsonl"

python eval/evaluation.py \
    ${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/${split}_predictions.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/output1.json
cat ${DATAPATH}/datasets/${dataset_name}/output1.json

python eval/transform_ir_annotation_to_tr.py \
    --input ${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl

python eval/evaluation_tr.py \
    ${DATAPATH}/datasets/${dataset_name}/${split}_texts.tr.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/${split}_tr_predictions.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/output2.json
cat ${DATAPATH}/datasets/${dataset_name}/output2.json
```

### ImageNet Zero-shot Classification The ImageNet zero-shot classification can be run as follows:

```bash
bash scripts/zeroshot_eval.sh 0 \
    ${DATAPATH} imagenet \
    ViT-B-16 RoBERTa-wwm-ext-base-chinese \
    ./pretrained_weights/QA-CLIP-base.pt
```

<br><br> # Huggingface Model and Online Demo We have open-sourced our model on the Hugging Face Hub for easier access and use. Additionally, we have prepared a simple online demo for zero-shot classification, allowing everyone to experience it firsthand. We encourage you to give it a try! [⭐️QA-CLIP-ViT-B-16⭐️](https://huggingface.co/TencentARC/QA-CLIP-ViT-B-16) [⭐️QA-CLIP-ViT-L-14⭐️](https://huggingface.co/TencentARC/QA-CLIP-ViT-L-14) Here are some examples for demonstration: ![sample](./examples/sample.png) <br><br> # Acknowledgments The project code is based on the implementation of <b>[Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP)</b>, and we are very grateful for their outstanding open-source contributions. <br><br>
distilbert-base-cased-distilled-squad
[ "pytorch", "tf", "rust", "safetensors", "openvino", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "model-index", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
257,745
null
Access to model afriedegg/MenkeA is restricted and you are not in the authorized list. Visit https://huggingface.co/afriedegg/MenkeA to ask for access.
distilgpt2
[ "pytorch", "tf", "jax", "tflite", "rust", "coreml", "safetensors", "gpt2", "text-generation", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:2201.08542", "arxiv:2203.12574", "arxiv:1910.09700", "arxiv:1503.02531", "transformers", "exbert", "license:apache-2.0", "model-index", "co2_eq_emissions", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,611,668
2023-05-10T06:59:55Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
model = load_from_hub(repo_id="manishankar5507/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
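As a hedged extension of the snippet above, a greedy rollout could look like the sketch below. The `qtable` key and the use of `hf_hub_download` as a stand-in for the card's `load_from_hub` helper are assumptions, not stated in this card.

```python
import pickle
import gym
from huggingface_hub import hf_hub_download

# Stand-in for the card's load_from_hub helper; the filename comes from the card.
path = hf_hub_download(repo_id="manishankar5507/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()
done = False
while not done:
    action = model["qtable"][state].argmax()  # "qtable" key is an assumption
    state, reward, done, _ = env.step(action)
```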
distilroberta-base
[ "pytorch", "tf", "jax", "rust", "safetensors", "roberta", "fill-mask", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,342,240
2023-05-10T07:02:34Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 38.60 +/- 17.42 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # MohammedDhiyaEddine/emploitic-sentence-transformer-tsdae-distiluse This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('MohammedDhiyaEddine/emploitic-sentence-transformer-tsdae-distiluse')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('MohammedDhiyaEddine/emploitic-sentence-transformer-tsdae-distiluse')
model = AutoModel.from_pretrained('MohammedDhiyaEddine/emploitic-sentence-transformer-tsdae-distiluse')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=MohammedDhiyaEddine/emploitic-sentence-transformer-tsdae-distiluse) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 8506 with parameters:

```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**: `sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss` Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 3e-05
    },
    "scheduler": "warmuplinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors <!--- Describe where people can find more information -->
AdapterHub/roberta-base-pf-social_i_qa
[ "roberta", "en", "dataset:social_i_qa", "arxiv:2104.08247", "adapter-transformers" ]
null
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - generated_from_trainer model-index: - name: sentiment-analysis-generic-fintuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-analysis-generic-fintuned This model is a fine-tuned version of [Seethal/sentiment_analysis_generic_dataset](https://huggingface.co/Seethal/sentiment_analysis_generic_dataset) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
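No usage snippet is given in the card above; a minimal inference sketch with the standard `transformers` pipeline could look like this. The `<user>` namespace in the repo id is a placeholder, and the label names depend on the base model's config.

```python
from transformers import pipeline

# "<user>" is a placeholder; the card does not state the owner namespace.
classifier = pipeline("text-classification", model="<user>/sentiment-analysis-generic-fintuned")

print(classifier("The product works exactly as advertised."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- the label-to-sentiment mapping
# comes from the base model's config and is not documented in this card.
```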
AdapterHub/roberta-base-pf-squad
[ "roberta", "en", "dataset:squad", "arxiv:2104.08247", "adapter-transformers", "question-answering", "adapterhub:qa/squad1" ]
question-answering
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: mit language: ja library_name: transformers tags: - pytorch - deberta - deberta-v2 - named entity recognition - named-entity-recognition - ner datasets: - wikipedia - cc100 - oscar metrics: - accuracy --- # This model is deberta-v2-large-japanese fine-tuned for named entity recognition (NER) This model was fine-tuned from deberta-v2-large-japanese on a Japanese NER dataset built from Wikipedia (by Stockmark Inc., https://github.com/stockmarkteam/ner-wikipedia-dataset). You can use this model for NER tasks. # How to use Please install transformers, pytorch, sentencepiece, and Juman++. Executing the code below performs named entity recognition:

```python
from transformers import AutoTokenizer, pipeline, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner')
model = AutoModelForTokenClassification.from_pretrained('Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner')  # load the fine-tuned model

text = ('昨日は東京で買い物をした')
ner = pipeline('ner', model=model, tokenizer=tokenizer)
result = ner(text)
print(result)
```

# Model accuracy Overall accuracy: 0.7974729241877256

| Label | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| Other organization names (その他の組織名) | 0.72 | 0.72 | 0.72 | 238 |
| Event names (イベント名) | 0.73 | 0.85 | 0.79 | 215 |
| Person names (人名) | 0.83 | 0.89 | 0.86 | 547 |
| Place names (地名) | 0.79 | 0.80 | 0.80 | 446 |
| Political organization names (政治的組織名) | 0.78 | 0.83 | 0.80 | 263 |
| Facility names (施設名) | 0.74 | 0.84 | 0.79 | 241 |
| Corporation names (法人名) | 0.84 | 0.80 | 0.82 | 487 |
| Product names (製品名) | 0.65 | 0.78 | 0.71 | 252 |
| micro avg | 0.77 | 0.82 | 0.80 | 2689 |
| macro avg | 0.76 | 0.82 | 0.79 | 2689 |
| weighted avg | 0.78 | 0.82 | 0.80 | 2689 |

# What is deberta-v2-large-japanese? It is a model trained on Japanese Wikipedia (3.2GB), cc100 (85GB), and oscar (54GB), released by the Kurohashi Lab at Kyoto University. # Model description This is a Japanese DeBERTa V2 large model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR. # Acknowledgments I would like to thank the Kurohashi Lab at Kyoto University for publishing the model.
AdapterHub/roberta-base-pf-sst2
[ "roberta", "en", "arxiv:2104.08247", "adapter-transformers", "text-classification", "adapterhub:sentiment/sst-2" ]
text-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
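The card points to the course but shows no loading code; a heavily hedged sketch follows. The repo id is a placeholder, the `model.pt` filename and the policy's `act()` helper are assumptions drawn from the course notebooks rather than this card, and loading a full `nn.Module` with `torch.load` requires the notebook's `Policy` class to be importable.

```python
import gym
import torch
from huggingface_hub import hf_hub_download

# Placeholder repo id; filename and serialization format are assumptions.
path = hf_hub_download(repo_id="<user>/Reinforce-CartPole", filename="model.pt")
policy = torch.load(path)  # requires the Policy class definition to be in scope
policy.eval()

env = gym.make("CartPole-v1")
state = env.reset()
done = False
while not done:
    action, _ = policy.act(state)  # act() is the course notebook's helper (assumed)
    state, reward, done, _ = env.step(action)
```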
AdapterHub/roberta-base-pf-trec
[ "roberta", "en", "dataset:trec", "arxiv:2104.08247", "adapter-transformers", "text-classification" ]
text-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
AlekseyKorshuk/horror-scripts
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
null
--- tags: - generated_from_trainer datasets: - doc2dial model-index: - name: zero-docalog results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zero-docalog This model is a fine-tuned version of [alistvt/zero-docalog](https://huggingface.co/alistvt/zero-docalog) on the doc2dial dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 30 - total_train_batch_size: 240 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
AlexMaclean/sentence-compression-roberta
[ "pytorch", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:

```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
Alexander-Learn/bert-finetuned-squad-accelerate
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 274.57 +/- 27.11 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
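One way the TODO above might be filled in, as a sketch: the repo id and filename below are placeholders, since this card names neither.

```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename -- substitute the actual checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the downloaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```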
AlexeyYazev/my-awesome-model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: ultmrck --- ### ultmrck-object10-v4 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: ultmrck (use that on your prompt) ![ultmrck 0](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%281%29.jpg)![ultmrck 1](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%282%29.jpg)![ultmrck 2](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%283%29.jpg)![ultmrck 3](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%284%29.jpg)![ultmrck 4](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%285%29.jpg)![ultmrck 5](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%286%29.jpg)![ultmrck 6](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%287%29.jpg)![ultmrck 7](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%288%29.jpg)![ultmrck 8](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%289%29.jpg)![ultmrck 9](https://huggingface.co/wimvanhenden/ultmrck-object10-v4/resolve/main/concept_images/ultmrck_%2810%29.jpg)
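Beyond the linked Colab notebook, a minimal `diffusers` sketch for this concept could look like the following; the fp16 dtype and CUDA device are just example choices, and the prompt is illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wimvanhenden/ultmrck-object10-v4", torch_dtype=torch.float16
).to("cuda")

# The card says to use the concept token "ultmrck" in your prompt.
image = pipe("a photo of ultmrck on a wooden table", num_inference_steps=30).images[0]
image.save("ultmrck.png")
```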
AlgoveraAI/dcgan
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: AliCampbellKhaya/ppo-PyramidsTraining 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Amitabh/doc-classification
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Amro-Kamal/gpt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-10T17:07:46Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Find your model_id: Libra2023/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Anamika/autonlp-fa-473312409
[ "pytorch", "roberta", "text-classification", "en", "dataset:Anamika/autonlp-data-fa", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: Actuary/ppo-DL-PyramidsRND 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AndreLiu1225/t5-news-summarizer
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # wand-duckstyle2 API Inference ![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/9143458471683739991.png) ## Get API Key Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed. Replace the key in the code below and change **model_id** to "wand-duckstyle2". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/wand-duckstyle2) Credits: [View credits](https://civitai.com/?query=wand-duckstyle2) View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "wand-duckstyle2",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
AndreLiu1225/t5-news
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
18
null
--- tags: - text2text-generation - definition-modeling metrics: - rouge, bleu, bert-f1 model-index: - name: flan-t5-definition-en-xl results: [] language: - en widget: - text: "He ate a sweet apple. What is the definition of apple?" example_title: "Definition generation" - text: "The paper contains a number of original ideas about color perception. What is the definition of original?" example_title: "Definition generation" license: cc-by-sa-4.0 datasets: - marksverdhei/wordnet-definitions-en-2021 --- # FLAN-T5-Definition XL This model is a version of [FLAN-T5 XL](https://huggingface.co/google/flan-t5-xl) finetuned on a dataset of English definitions and usage examples. It generates definitions of English words in context. Its input is the usage example and the instruction question "What is the definition of TARGET_WORD?" This project is a collaboration between the [Dialogue Modelling Group](https://dmg-illc.github.io/dmg/) at the University of Amsterdam and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo. ## Sizes: - [FLAN-T5-Definition Base (250M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-base) - [FLAN-T5-Definition Large (780M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-large) - [FLAN-T5-Definition XL (3B parameters)](https://huggingface.co/ltg/flan-t5-definition-en-xl) ## Model description See details in the paper `Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis` (ACL'2023) by Mario Giulianelli, Iris Luden, Raquel Fernandez and Andrey Kutuzov. ## Intended uses & limitations The model is intended for research purposes, as a source of contextualized dictionary-like lexical definitions. The fine-tuning datasets were limited to English. Although the original FLAN-T5 is a multilingual model, we did not thoroughly evaluate its ability to generate definitions in languages other than English. Generated definitions can contain all sorts of biases and stereotypes, stemming from the underlying language model. ## Training and evaluation data Three datasets were used to fine-tune the model: - *WordNet* ([Ishiwatari et al., NAACL 2019](https://aclanthology.org/N19-1350/)), also [available on HF](https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021) - *Oxford dictionary or CHA* ([Gadetsky et al., ACL 2018](https://aclanthology.org/P18-2043/)) - English subset of *CodWoE* ([Mickus et al., SemEval 2022](https://aclanthology.org/2022.semeval-1.1/)) FLAN-T5-Definition XL achieves the following results on the WordNet test set: - BLEU: 32.81 - ROUGE-L: 52.21 - BERT-F1: 92.16 FLAN-T5-Definition XL achieves the following results on the Oxford dictionary test set: - BLEU: 18.69 - ROUGE-L: 38.72 - BERT-F1: 89.75 ## Training procedure FLAN-T5 XL was fine-tuned in a sequence-to-sequence mode on examples of contextualized dictionary definitions. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+rocm5.1.1 - Datasets 2.4.0 - Tokenizers 0.12.1 ## Citation
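The card describes the input format but includes no code; a minimal generation sketch with the standard `transformers` seq2seq API might look like this (decoding settings are example choices).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ltg/flan-t5-definition-en-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/flan-t5-definition-en-xl")

# Input = usage example followed by the instruction question from the card.
prompt = "He ate a sweet apple. What is the definition of apple?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```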
Andrija/SRoBERTa-F
[ "pytorch", "tensorboard", "roberta", "fill-mask", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "dataset:cc100", "dataset:hrwac", "transformers", "masked-lm", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
59
null
--- tags: - text2text-generation - definition-modeling metrics: - rouge, bleu, bert-f1 model-index: - name: flan-t5-definition-en-base results: [] language: - en widget: - text: "He ate a sweet apple. What is the definition of apple?" example_title: "Definition generation" - text: "The paper contains a number of original ideas about color perception. What is the definition of original?" example_title: "Definition generation" license: cc-by-sa-4.0 datasets: - marksverdhei/wordnet-definitions-en-2021 --- # FLAN-T5-Definition Base This model is a version of [FLAN-T5 Base](https://huggingface.co/google/flan-t5-base) finetuned on a dataset of English definitions and usage examples. It generates definitions of English words in context. Its input is the usage example and the instruction question "What is the definition of TARGET_WORD?" This project is a collaboration between the [Dialogue Modelling Group](https://dmg-illc.github.io/dmg/) at the University of Amsterdam and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo. ## Sizes: - [FLAN-T5-Definition Base (250M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-base) - [FLAN-T5-Definition Large (780M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-large) - [FLAN-T5-Definition XL (3B parameters)](https://huggingface.co/ltg/flan-t5-definition-en-xl) ## Model description See details in the paper `Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis` (ACL'2023) by Mario Giulianelli, Iris Luden, Raquel Fernandez and Andrey Kutuzov. ## Intended uses & limitations The model is intended for research purposes, as a source of contextualized dictionary-like lexical definitions. The fine-tuning datasets were limited to English. Although the original FLAN-T5 is a multilingual model, we did not thoroughly evaluate its ability to generate definitions in languages other than English. Generated definitions can contain all sorts of biases and stereotypes, stemming from the underlying language model. ## Training and evaluation data Three datasets were used to fine-tune the model: - *WordNet* ([Ishiwatari et al., NAACL 2019](https://aclanthology.org/N19-1350/)), also [available on HF](https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021) - *Oxford dictionary or CHA* ([Gadetsky et al., ACL 2018](https://aclanthology.org/P18-2043/)) - English subset of *CodWoE* ([Mickus et al., SemEval 2022](https://aclanthology.org/2022.semeval-1.1/)) FLAN-T5-Definition Base achieves the following results on the WordNet test set: - BLEU: 10.38 - ROUGE-L: 27.17 - BERT-F1: 88.22 FLAN-T5-Definition Base achieves the following results on the Oxford dictionary test set: - BLEU: 7.18 - ROUGE-L: 23.04 - BERT-F1: 86.90 ## Training procedure FLAN-T5 Base was fine-tuned in a sequence-to-sequence mode on examples of contextualized dictionary definitions.
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.5645 | 1.0 | 2740 | 2.2535 | 24.4437 | 6.4189 | 22.7949 | 22.7909 | 11.4969 | | 2.3501 | 2.0 | 5480 | 2.1642 | 25.6642 | 7.289 | 23.8689 | 23.8749 | 11.7150 | | 2.2516 | 3.0 | 8220 | 2.1116 | 26.4562 | 7.8955 | 24.6275 | 24.6376 | 11.7441 | | 2.1806 | 4.0 | 10960 | 2.0737 | 27.0392 | 8.2393 | 25.1555 | 25.1641 | 11.7930 | | 2.1233 | 5.0 | 13700 | 2.0460 | 27.2709 | 8.4244 | 25.3847 | 25.4003 | 11.9014 | | 2.0765 | 6.0 | 16440 | 2.0236 | 27.5456 | 8.6096 | 25.6321 | 25.6462 | 11.8113 | | 2.0371 | 7.0 | 19180 | 2.0047 | 27.7209 | 8.7277 | 25.7871 | 25.8084 | 11.6875 | | 2.0036 | 8.0 | 21920 | 1.9918 | 28.0431 | 8.9863 | 26.1072 | 26.1198 | 11.5487 | | 1.9752 | 9.0 | 24660 | 1.9788 | 28.1807 | 9.0219 | 26.1692 | 26.1886 | 11.7939 | | 1.9513 | 10.0 | 27400 | 1.9702 | 28.3204 | 9.1572 | 26.2955 | 26.3029 | 11.5936 | | 1.9309 | 11.0 | 30140 | 1.9640 | 28.4289 | 9.2845 | 26.4006 | 26.418 | 11.8371 | | 1.9144 | 12.0 | 32880 | 1.9571 | 28.4504 | 9.3406 | 26.4273 | 26.4384 | 11.6201 | | 1.9013 | 13.0 | 35620 | 1.9544 | 28.6319 | 9.3682 | 26.605 | 26.613 | 11.7067 | | 1.8914 | 14.0 | 38360 | 1.9512 | 28.6435 | 9.3976 | 26.5839 | 26.5918 | 11.7307 | | 1.8866 | 15.0 | 41100 | 1.9509 | 28.6111 | 9.3857 | 26.551 | 26.5648 | 11.7470 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
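## Usage sketch

A minimal way to query the model, assuming the standard `transformers` sequence-to-sequence API and the prompt format described above (the generation settings are illustrative, not taken from the original card):

```python
# Hedged sketch: load the Base checkpoint linked above and ask for a definition in context.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ltg/flan-t5-definition-en-base")
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/flan-t5-definition-en-base")

# Usage example followed by the instruction question, as the card describes.
prompt = "He ate a sweet apple. What is the definition of apple?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)  # max_new_tokens is an assumption
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```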
Andrija/SRoBERTaFastBPE-2
[ "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: willaoao/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # willaoao/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6185 - Validation Loss: 1.8316 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.5877 | 2.4400 | 0 | | 1.9036 | 1.8316 | 1 | | 1.6185 | 1.8316 | 2 | ### Framework versions - Transformers 4.29.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
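A minimal inference sketch for the model above, assuming the standard extractive question-answering interface of its `distilbert-base-uncased` base (the question/context pair is illustrative; since the checkpoint was trained with Keras, `framework="tf"` may be needed if only TensorFlow weights are published):

```python
from transformers import pipeline

# Hypothetical usage; the repo id comes from the card title above.
qa = pipeline("question-answering", model="willaoao/my_awesome_qa_model")
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
)
print(result["answer"], result["score"])
```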
Andrija/SRoBERTaFastBPE
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - conll2003 metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-conll2003 results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: F1 type: f1 value: 0.948444966049124 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-conll2003 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0898 - F1: 0.9484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.1415 | 1.0 | 439 | 0.0447 | 0.9367 | | 0.0429 | 2.0 | 878 | 0.0437 | 0.9310 | | 0.0259 | 3.0 | 1317 | 0.0534 | 0.9328 | | 0.0195 | 4.0 | 1756 | 0.0449 | 0.9429 | | 0.0146 | 5.0 | 2195 | 0.0484 | 0.9421 | | 0.0121 | 6.0 | 2634 | 0.0523 | 0.9392 | | 0.0099 | 7.0 | 3073 | 0.0500 | 0.9428 | | 0.0077 | 8.0 | 3512 | 0.0536 | 0.9423 | | 0.008 | 9.0 | 3951 | 0.0672 | 0.9254 | | 0.0079 | 10.0 | 4390 | 0.0589 | 0.9442 | | 0.007 | 11.0 | 4829 | 0.0669 | 0.9400 | | 0.0051 | 12.0 | 5268 | 0.0602 | 0.9409 | | 0.0052 | 13.0 | 5707 | 0.0639 | 0.9441 | | 0.0036 | 14.0 | 6146 | 0.0635 | 0.9431 | | 0.0033 | 15.0 | 6585 | 0.0858 | 0.9328 | | 0.0038 | 16.0 | 7024 | 0.0653 | 0.9478 | | 0.0047 | 17.0 | 7463 | 0.0689 | 0.9431 | | 0.0039 | 18.0 | 7902 | 0.0687 | 0.9442 | | 0.0031 | 19.0 | 8341 | 0.0687 | 0.9459 | | 0.0027 | 20.0 | 8780 | 0.0785 | 0.9424 | | 0.0047 | 21.0 | 9219 | 0.0654 | 0.9444 | | 0.0035 | 22.0 | 9658 | 0.0748 | 0.9454 | | 0.0021 | 23.0 | 10097 | 0.0714 | 0.9423 | | 0.003 | 24.0 | 10536 | 0.0730 | 0.9433 | | 0.0031 | 25.0 | 10975 | 0.0682 | 0.9417 | | 0.0021 | 26.0 | 11414 | 0.0762 | 0.9407 | | 0.0025 | 27.0 | 11853 | 0.0773 | 0.9391 | | 0.0019 | 28.0 | 12292 | 0.0739 | 0.9420 | | 0.0032 | 29.0 | 12731 | 0.0755 | 0.9413 | | 0.0023 | 30.0 | 13170 | 0.0755 | 0.9439 | | 0.0024 | 31.0 | 13609 | 0.0747 | 0.9456 | | 0.0018 | 32.0 | 14048 | 0.0730 | 0.9430 | | 0.0017 | 33.0 | 14487 | 0.0866 | 0.9385 | | 0.0019 | 34.0 | 14926 | 0.0695 | 0.9440 | | 0.0016 | 35.0 | 15365 | 0.0818 | 0.9442 | | 0.0034 | 36.0 | 15804 | 0.0750 | 0.9459 | | 0.0019 | 37.0 | 16243 | 0.0808 | 0.9414 | | 0.0013 | 38.0 | 16682 | 0.0797 | 0.9422 | | 0.0015 | 39.0 | 17121 | 0.0814 | 0.9394 | | 0.0019 | 40.0 | 17560 | 0.0757 | 0.9415 | | 0.0011 | 41.0 | 17999 | 0.0778 | 0.9453 | | 0.0011 | 42.0 | 18438 | 0.0825 | 0.9407 | | 0.0012 | 43.0 | 18877 | 0.0767 | 0.9458 | | 0.0022 | 44.0 | 19316 | 0.0865 | 0.9396 | | 0.0009 | 45.0 | 19755 | 0.0826 | 0.9459 | | 0.0008 | 46.0 | 20194 | 0.0819 | 0.9473 | | 0.0017 | 47.0 | 20633 | 0.0844 | 0.9420 | | 0.0015 | 48.0 | 21072 | 0.0827 | 0.9448 | | 0.0014 | 49.0 | 21511 | 0.0800 | 0.9464 | | 0.0008 | 50.0 | 21950 | 0.0770 | 0.9474 | | 
0.0011 | 51.0 | 22389 | 0.0766 | 0.9471 | | 0.0006 | 52.0 | 22828 | 0.0896 | 0.9424 | | 0.0011 | 53.0 | 23267 | 0.0866 | 0.9425 | | 0.001 | 54.0 | 23706 | 0.0853 | 0.9426 | | 0.0007 | 55.0 | 24145 | 0.0831 | 0.9462 | | 0.0008 | 56.0 | 24584 | 0.0805 | 0.9457 | | 0.0008 | 57.0 | 25023 | 0.0866 | 0.9438 | | 0.0008 | 58.0 | 25462 | 0.0822 | 0.9421 | | 0.0011 | 59.0 | 25901 | 0.0837 | 0.9417 | | 0.0007 | 60.0 | 26340 | 0.0823 | 0.9466 | | 0.0008 | 61.0 | 26779 | 0.0825 | 0.9425 | | 0.0004 | 62.0 | 27218 | 0.0825 | 0.9433 | | 0.0005 | 63.0 | 27657 | 0.0826 | 0.9435 | | 0.0004 | 64.0 | 28096 | 0.0838 | 0.9437 | | 0.0008 | 65.0 | 28535 | 0.0909 | 0.9424 | | 0.0004 | 66.0 | 28974 | 0.0825 | 0.9464 | | 0.0004 | 67.0 | 29413 | 0.0917 | 0.9454 | | 0.0004 | 68.0 | 29852 | 0.0843 | 0.9487 | | 0.0005 | 69.0 | 30291 | 0.0825 | 0.9481 | | 0.0003 | 70.0 | 30730 | 0.0825 | 0.9456 | | 0.0005 | 71.0 | 31169 | 0.0835 | 0.9460 | | 0.0003 | 72.0 | 31608 | 0.0906 | 0.9481 | | 0.0001 | 73.0 | 32047 | 0.0916 | 0.9471 | | 0.0007 | 74.0 | 32486 | 0.0885 | 0.9460 | | 0.0003 | 75.0 | 32925 | 0.0879 | 0.9481 | | 0.0001 | 76.0 | 33364 | 0.0871 | 0.9505 | | 0.0002 | 77.0 | 33803 | 0.0906 | 0.9486 | | 0.0003 | 78.0 | 34242 | 0.0934 | 0.9469 | | 0.0002 | 79.0 | 34681 | 0.0911 | 0.9466 | | 0.0003 | 80.0 | 35120 | 0.0871 | 0.9489 | | 0.0003 | 81.0 | 35559 | 0.0876 | 0.9494 | | 0.0002 | 82.0 | 35998 | 0.0884 | 0.9482 | | 0.0001 | 83.0 | 36437 | 0.0910 | 0.9469 | | 0.0002 | 84.0 | 36876 | 0.0874 | 0.9473 | | 0.0002 | 85.0 | 37315 | 0.0864 | 0.9463 | | 0.0001 | 86.0 | 37754 | 0.0878 | 0.9472 | | 0.0002 | 87.0 | 38193 | 0.0836 | 0.9500 | | 0.0001 | 88.0 | 38632 | 0.0861 | 0.9495 | | 0.0001 | 89.0 | 39071 | 0.0869 | 0.9503 | | 0.0001 | 90.0 | 39510 | 0.0878 | 0.9480 | | 0.0001 | 91.0 | 39949 | 0.0878 | 0.9501 | | 0.0 | 92.0 | 40388 | 0.0886 | 0.9477 | | 0.0001 | 93.0 | 40827 | 0.0884 | 0.9497 | | 0.0001 | 94.0 | 41266 | 0.0897 | 0.9487 | | 0.0001 | 95.0 | 41705 | 0.0896 | 0.9490 | | 0.0001 | 96.0 | 42144 | 0.0879 | 0.9499 | | 0.0001 | 97.0 | 42583 | 0.0884 | 0.9490 | | 0.0001 | 98.0 | 43022 | 0.0899 | 0.9486 | | 0.0001 | 99.0 | 43461 | 0.0897 | 0.9488 | | 0.0001 | 100.0 | 43900 | 0.0898 | 0.9484 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
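A minimal sketch of running the fine-tuned checkpoint above for NER, assuming the standard token-classification pipeline (the Hub namespace is not stated in the card, so the repo id below is a placeholder, and the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-conll2003",  # placeholder repo id
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face was founded in New York City."))
```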
Andy1621/uniformer
[ "license:mit", "has_space" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### himanshu Dreambooth model trained by himmug with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(2).jpg) ![1](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(6).jpg) ![2](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(4).jpg) ![3](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(3).jpg) ![4](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(7).jpg) ![5](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(1).jpg) ![6](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(5).jpg)
Ankitha/DialoGPT-small-harrypottery
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: openrail duplicated_from: RectalWorm/loras ---
Ann2020/distilbert-base-uncased-finetuned-ner
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit --- #### Current Training Steps: 100,000 This repo contains a low-rank adapter (LoRA) for Bloom-7b1 fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in 52 languages. ### Dataset Creation 1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 10 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 64 - Lora target modules: query_key_value That is:

```
python finetune.py \
    --base_model='bigscience/bloom-7b1' \
    --num_epochs=10 \
    --batch_size=128 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./bactrian-x-bloom-7b1-lora' \
    --lora_target_modules='query_key_value' \
    --lora_r=64 \
    --micro_batch_size=32
```

Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information

```
@misc{bactrian,
    author = {Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
    title = {Bactrian-X: A Multilingual Replicable Instruction-Following Model},
    year = {2023},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/MBZUAI-nlp/Bactrian-X}},
}
```
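A minimal sketch for loading and querying the adapter with the `peft` library. The card does not state this adapter's Hub id, so the id below is an assumption, and the Alpaca-style prompt template is inferred from the Alpaca-LoRA lineage mentioned above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Bloom-7b1 base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-7b1", torch_dtype=torch.float16, device_map="auto"
)
# Adapter repo id is an assumption; substitute this model's actual Hub id.
model = PeftModel.from_pretrained(base, "MBZUAI/bactrian-x-bloom-7b1-lora")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")

# Alpaca-style template (assumed, not stated in this card).
prompt = "### Instruction:\nList three benefits of regular exercise.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```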
Ann2020/rubert-base-cased-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # iam-dosmix API Inference ![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/9565390731683744303.png) ## Get API Key Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed. Replace Key in below code, change **model_id** to "iam-dosmix" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/iam-dosmix) Credits: [View credits](https://civitai.com/?query=iam-dosmix) View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "iam-dosmix",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
AnonymousSub/AR_SDR_HF_model_base
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: irow/ppo-PyramidsRND 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AnonymousSub/AR_cline
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2023-05-10T19:12:51Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AnonymousSub/AR_rule_based_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: openrail widget: - text: I am totally a human, trust me bro. example_title: default - text: >- In Finnish folklore, all places and things, and also human beings, have a haltija (a genius, guardian spirit) of their own. One such haltija is called etiäinen—an image, doppelgänger, or just an impression that goes ahead of a person, doing things the person in question later does. For example, people waiting at home might hear the door close or even see a shadow or a silhouette, only to realize that no one has yet arrived. Etiäinen can also refer to some kind of a feeling that something is going to happen. Sometimes it could, for example, warn of a bad year coming. In modern Finnish, the term has detached from its shamanistic origins and refers to premonition. Unlike clairvoyance, divination, and similar practices, etiäiset (plural) are spontaneous and can't be induced. Quite the opposite, they may be unwanted and cause anxiety, like ghosts. Etiäiset need not be too dramatic and may concern everyday events, although ones related to e.g. deaths are common. As these phenomena are still reported today, they can be considered a living tradition, as a way to explain the psychological experience of premonition. example_title: real wikipedia - text: >- In Finnish folklore, all places and things, animate or inanimate, have a spirit or "etiäinen" that lives there. Etiäinen can manifest in many forms, but is usually described as a kind, elderly woman with white hair. She is the guardian of natural places and often helps people in need. Etiäinen has been a part of Finnish culture for centuries and is still widely believed in today. Folklorists study etiäinen to understand Finnish traditions and how they have changed over time. example_title: generated wikipedia - text: >- This paper presents a novel framework for sparsity-certifying graph decompositions, which are important tools in various areas of computer science, including algorithm design, complexity theory, and optimization. Our approach is based on the concept of "cut sparsifiers," which are sparse graphs that preserve the cut structure of the original graph up to a certain error bound. We show that cut sparsifiers can be efficiently constructed using a combination of spectral techniques and random sampling, and we use them to develop new algorithms for decomposing graphs into sparse subgraphs. example_title: from ChatGPT - text: >- Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. 
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general. example_title: GPT-3 paper datasets: - NicolaiSivesind/human-vs-machine - gfissore/arxiv-abstracts-2021 language: - en pipeline_tag: text-classification tags: - mgt-detection - ai-detection --- Machine-generated text-detection by fine-tuning of language models === This project is related to a bachelor's thesis with the title "*Turning Poachers into Gamekeepers: Detecting Machine-Generated Text in Academia using Large Language Models*" (not yet published) written by *Nicolai Thorer Sivesind* and *Andreas Bentzen Winje* at the *Department of Computer Science* at the *Norwegian University of Science and Technology*. It contains text classification models trained to distinguish human-written text from text generated by language models like ChatGPT and GPT-3. The best models were able to achieve an accuracy of 100% on real and *GPT-3*-generated wikipedia articles (4500 samples), and an accuracy of 98.4% on real and *ChatGPT*-generated research abstracts (3000 samples). The dataset card for the dataset that was created in relation to this project can be found [here](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine). **NOTE**: the hosted inference on this site only works for the RoBERTa models, not for the Bloomz models. The Bloomz models can otherwise produce wrong predictions when the attention mask from the tokenizer is not explicitly passed to the model during inference. The [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) library seems to produce the most consistent results. ## Fine-tuned detectors This project includes 12 fine-tuned models, based on the RoBERTa-base model and three sizes of the Bloomz models.
| Base-model | RoBERTa-base | Bloomz-560m | Bloomz-1b7 | Bloomz-3b |
|------------|--------------|-------------|------------|-----------|
| Wiki | [roberta-wiki](https://huggingface.co/andreas122001/roberta-academic-detector) | [Bloomz-560m-wiki](https://huggingface.co/andreas122001/bloomz-560m-wiki-detector) | [Bloomz-1b7-wiki](https://huggingface.co/andreas122001/bloomz-1b7-wiki-detector) | [Bloomz-3b-wiki](https://huggingface.co/andreas122001/bloomz-3b-wiki-detector) |
| Academic | [roberta-academic](https://huggingface.co/andreas122001/roberta-wiki-detector) | [Bloomz-560m-academic](https://huggingface.co/andreas122001/bloomz-560m-academic-detector) | [Bloomz-1b7-academic](https://huggingface.co/andreas122001/bloomz-1b7-academic-detector) | [Bloomz-3b-academic](https://huggingface.co/andreas122001/bloomz-3b-academic-detector) |
| Mixed | [roberta-mixed](https://huggingface.co/andreas122001/roberta-mixed-detector) | [Bloomz-560m-mixed](https://huggingface.co/andreas122001/bloomz-560m-mixed-detector) | [Bloomz-1b7-mixed](https://huggingface.co/andreas122001/bloomz-1b7-mixed-detector) | [Bloomz-3b-mixed](https://huggingface.co/andreas122001/bloomz-3b-mixed-detector) |

### Datasets

The models were trained on selections from the [GPT-wiki-intros]() and [ChatGPT-Research-Abstracts](), and are separated into three types, **wiki**-detectors, **academic**-detectors and **mixed**-detectors, respectively.

- **Wiki-detectors**:
  - Trained on 30'000 datapoints (10%) of GPT-wiki-intros.
  - Best model (in-domain) is Bloomz-3b-wiki, with an accuracy of 100%.
- **Academic-detectors**:
  - Trained on 20'000 datapoints (100%) of ChatGPT-Research-Abstracts.
  - Best model (in-domain) is Bloomz-3b-academic, with an accuracy of 98.4%
- **Mixed-detectors**:
  - Trained on 15'000 datapoints (5%) of GPT-wiki-intros and 10'000 datapoints (50%) of ChatGPT-Research-Abstracts.
  - Best model (in-domain) is RoBERTa-mixed, with an F1-score of 99.3%.

### Hyperparameters

All models were trained using the same hyperparameters:

```python
{
    "num_train_epochs": 1,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "batch_size": 8,
    "adam_epsilon": 1e-08,
    "optim": "adamw_torch",  # the optimizer (AdamW)
    "learning_rate": 5e-05,  # (LR)
    "lr_scheduler_type": "linear",  # scheduler type for LR
    "seed": 42,  # seed for PyTorch RNG-generator.
}
```

### Metrics

Metrics can be found at https://wandb.ai/idatt2900-072/IDATT2900-072.
In-domain performance of wiki-detectors:

| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.973 | *1.000 | 0.945 | 0.972 |
| Bloomz-1b7 | 0.972 | *1.000 | 0.945 | 0.972 |
| Bloomz-3b | *1.000 | *1.000 | *1.000 | *1.000 |
| RoBERTa | 0.998 | 0.999 | 0.997 | 0.998 |

In-domain performance of academic-detectors:

| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.964 | 0.963 | 0.965 | 0.964 |
| Bloomz-1b7 | 0.946 | 0.941 | 0.951 | 0.946 |
| Bloomz-3b | *0.984 | *0.983 | 0.985 | *0.984 |
| RoBERTa | 0.982 | 0.968 | *0.997 | 0.982 |

F1-scores of the mixed-detectors on all three datasets:

| Base model | Mixed | Wiki | CRA |
|-------------|--------|--------|--------|
| Bloomz-560m | 0.948 | 0.972 | *0.848 |
| Bloomz-1b7 | 0.929 | 0.964 | 0.816 |
| Bloomz-3b | 0.988 | 0.996 | 0.772 |
| RoBERTa | *0.993 | *0.997 | 0.829 |

## Credits

- [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro), by Aaditya Bhat
- [arxiv-abstracts-2021](https://huggingface.co/datasets/gfissore/arxiv-abstracts-2021), by Giancarlo
- [Bloomz](https://huggingface.co/bigscience/bloomz), by BigScience
- [RoBERTa](https://huggingface.co/roberta-base), by Liu et. al.

## Citation

Please use the following citation:

```
@misc {sivesind_2023,
    author = { {Nicolai Thorer Sivesind} and {Andreas Bentzen Winje} },
    title = { Machine-generated text-detection by fine-tuning of language models },
    url = { https://huggingface.co/andreas122001/roberta-academic-detector },
    year = 2023,
    publisher = { Hugging Face }
}
```
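As the note above says, the `pipeline` API tends to give the most consistent results for these detectors. A minimal sketch for one of the RoBERTa detectors (the repo id is taken from the citation above; the input text is illustrative):

```python
from transformers import pipeline

# Any of the fine-tuned detectors listed in the table works the same way.
detector = pipeline("text-classification", model="andreas122001/roberta-academic-detector")
print(detector("Large language models can generate fluent research abstracts."))
```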
AnonymousSub/AR_rule_based_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: openrail widget: - text: I am totally a human, trust me bro. example_title: default - text: >- In Finnish folklore, all places and things, and also human beings, have a haltija (a genius, guardian spirit) of their own. One such haltija is called etiäinen—an image, doppelgänger, or just an impression that goes ahead of a person, doing things the person in question later does. For example, people waiting at home might hear the door close or even see a shadow or a silhouette, only to realize that no one has yet arrived. Etiäinen can also refer to some kind of a feeling that something is going to happen. Sometimes it could, for example, warn of a bad year coming. In modern Finnish, the term has detached from its shamanistic origins and refers to premonition. Unlike clairvoyance, divination, and similar practices, etiäiset (plural) are spontaneous and can't be induced. Quite the opposite, they may be unwanted and cause anxiety, like ghosts. Etiäiset need not be too dramatic and may concern everyday events, although ones related to e.g. deaths are common. As these phenomena are still reported today, they can be considered a living tradition, as a way to explain the psychological experience of premonition. example_title: real wikipedia - text: >- In Finnish folklore, all places and things, animate or inanimate, have a spirit or "etiäinen" that lives there. Etiäinen can manifest in many forms, but is usually described as a kind, elderly woman with white hair. She is the guardian of natural places and often helps people in need. Etiäinen has been a part of Finnish culture for centuries and is still widely believed in today. Folklorists study etiäinen to understand Finnish traditions and how they have changed over time. example_title: generated wikipedia - text: >- This paper presents a novel framework for sparsity-certifying graph decompositions, which are important tools in various areas of computer science, including algorithm design, complexity theory, and optimization. Our approach is based on the concept of "cut sparsifiers," which are sparse graphs that preserve the cut structure of the original graph up to a certain error bound. We show that cut sparsifiers can be efficiently constructed using a combination of spectral techniques and random sampling, and we use them to develop new algorithms for decomposing graphs into sparse subgraphs. example_title: from ChatGPT - text: >- Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. 
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general. example_title: GPT-3 paper datasets: - NicolaiSivesind/human-vs-machine - gfissore/arxiv-abstracts-2021 language: - en pipeline_tag: text-classification tags: - mgt-detection - ai-detection --- Machine-generated text-detection by fine-tuning of language models === This project is related to a bachelor's thesis with the title "*Turning Poachers into Gamekeepers: Detecting Machine-Generated Text in Academia using Large Language Models*" (not yet published) written by *Nicolai Thorer Sivesind* and *Andreas Bentzen Winje* at the *Department of Computer Science* at the *Norwegian University of Science and Technology*. It contains text classification models trained to distinguish human-written text from text generated by language models like ChatGPT and GPT-3. The best models were able to achieve an accuracy of 100% on real and *GPT-3*-generated wikipedia articles (4500 samples), and an accuracy of 98.4% on real and *ChatGPT*-generated research abstracts (3000 samples). The dataset card for the dataset that was created in relation to this project can be found [here](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine). **NOTE**: the hosted inference on this site only works for the RoBERTa models, not for the Bloomz models. The Bloomz models can otherwise produce wrong predictions when the attention mask from the tokenizer is not explicitly passed to the model during inference. The [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) library seems to produce the most consistent results. ## Fine-tuned detectors This project includes 12 fine-tuned models, based on the RoBERTa-base model and three sizes of the Bloomz models.
| Base-model | RoBERTa-base | Bloomz-560m | Bloomz-1b7 | Bloomz-3b |
|------------|--------------|-------------|------------|-----------|
| Wiki | [roberta-wiki](https://huggingface.co/andreas122001/roberta-academic-detector) | [Bloomz-560m-wiki](https://huggingface.co/andreas122001/bloomz-560m-wiki-detector) | [Bloomz-1b7-wiki](https://huggingface.co/andreas122001/bloomz-1b7-wiki-detector) | [Bloomz-3b-wiki](https://huggingface.co/andreas122001/bloomz-3b-wiki-detector) |
| Academic | [roberta-academic](https://huggingface.co/andreas122001/roberta-wiki-detector) | [Bloomz-560m-academic](https://huggingface.co/andreas122001/bloomz-560m-academic-detector) | [Bloomz-1b7-academic](https://huggingface.co/andreas122001/bloomz-1b7-academic-detector) | [Bloomz-3b-academic](https://huggingface.co/andreas122001/bloomz-3b-academic-detector) |
| Mixed | [roberta-mixed](https://huggingface.co/andreas122001/roberta-mixed-detector) | [Bloomz-560m-mixed](https://huggingface.co/andreas122001/bloomz-560m-mixed-detector) | [Bloomz-1b7-mixed](https://huggingface.co/andreas122001/bloomz-1b7-mixed-detector) | [Bloomz-3b-mixed](https://huggingface.co/andreas122001/bloomz-3b-mixed-detector) |

### Datasets

The models were trained on selections from the [GPT-wiki-intros]() and [ChatGPT-Research-Abstracts](), and are separated into three types, **wiki**-detectors, **academic**-detectors and **mixed**-detectors, respectively.

- **Wiki-detectors**:
  - Trained on 30'000 datapoints (10%) of GPT-wiki-intros.
  - Best model (in-domain) is Bloomz-3b-wiki, with an accuracy of 100%.
- **Academic-detectors**:
  - Trained on 20'000 datapoints (100%) of ChatGPT-Research-Abstracts.
  - Best model (in-domain) is Bloomz-3b-academic, with an accuracy of 98.4%
- **Mixed-detectors**:
  - Trained on 15'000 datapoints (5%) of GPT-wiki-intros and 10'000 datapoints (50%) of ChatGPT-Research-Abstracts.
  - Best model (in-domain) is RoBERTa-mixed, with an F1-score of 99.3%.

### Hyperparameters

All models were trained using the same hyperparameters:

```python
{
    "num_train_epochs": 1,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "batch_size": 8,
    "adam_epsilon": 1e-08,
    "optim": "adamw_torch",  # the optimizer (AdamW)
    "learning_rate": 5e-05,  # (LR)
    "lr_scheduler_type": "linear",  # scheduler type for LR
    "seed": 42,  # seed for PyTorch RNG-generator.
}
```

### Metrics

Metrics can be found at https://wandb.ai/idatt2900-072/IDATT2900-072.
In-domain performance of wiki-detectors:

| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.973 | *1.000 | 0.945 | 0.972 |
| Bloomz-1b7 | 0.972 | *1.000 | 0.945 | 0.972 |
| Bloomz-3b | *1.000 | *1.000 | *1.000 | *1.000 |
| RoBERTa | 0.998 | 0.999 | 0.997 | 0.998 |

In-domain performance of academic-detectors:

| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.964 | 0.963 | 0.965 | 0.964 |
| Bloomz-1b7 | 0.946 | 0.941 | 0.951 | 0.946 |
| Bloomz-3b | *0.984 | *0.983 | 0.985 | *0.984 |
| RoBERTa | 0.982 | 0.968 | *0.997 | 0.982 |

F1-scores of the mixed-detectors on all three datasets:

| Base model | Mixed | Wiki | CRA |
|-------------|--------|--------|--------|
| Bloomz-560m | 0.948 | 0.972 | *0.848 |
| Bloomz-1b7 | 0.929 | 0.964 | 0.816 |
| Bloomz-3b | 0.988 | 0.996 | 0.772 |
| RoBERTa | *0.993 | *0.997 | 0.829 |

## Credits

- [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro), by Aaditya Bhat
- [arxiv-abstracts-2021](https://huggingface.co/datasets/gfissore/arxiv-abstracts-2021), by Giancarlo
- [Bloomz](https://huggingface.co/bigscience/bloomz), by BigScience
- [RoBERTa](https://huggingface.co/roberta-base), by Liu et. al.

## Citation

Please use the following citation:

```
@misc {sivesind_2023,
    author = { {Nicolai Thorer Sivesind} and {Andreas Bentzen Winje} },
    title = { Machine-generated text-detection by fine-tuning of language models },
    url = { https://huggingface.co/andreas122001/roberta-academic-detector },
    year = 2023,
    publisher = { Hugging Face }
}
```
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
Access to model chau-gantry/paraphrase-MiniLM-L6-v2 is restricted and you are not in the authorized list. Visit https://huggingface.co/chau-gantry/paraphrase-MiniLM-L6-v2 to ask for access.
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-05-10T22:51:37Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards license: other inference: false --- # BLOOMChat V1.0 <!-- Provide a quick summary of what the model is/does. --> BLOOMChat is a 176 billion parameter multilingual chat model. It is instruction tuned from [BLOOM (176B)](https://huggingface.co/bigscience/bloom) on assistant-style conversation datasets and supports conversation, question answering and generative answers in multiple languages. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [SambaNova Systems](https://sambanova.ai/) - **Co-developed by:** [Together Computer](https://www.together.xyz/) - **Model type:** Language Model - **Language(s):** Multiple; see [training data from BLOOM](https://huggingface.co/bigscience/bloom#training-data) - **License:** BLOOMChat-176B LICENSE v1.0 - **Instruction Tuned from model:** [BigScience Group BLOOM](https://huggingface.co/bigscience/bloom) ### Basic Information <!-- Provide the basic links for the model. --> - **Blog Post**: [Link](https://sambanova.ai/blog/introducing-bloomchat-176b-the-multilingual-chat-based-llm/) - **Discord**: [Link](https://discord.com/invite/8z2Pe7cpRv) - **HF Hosting**: [Chat with me!](https://huggingface.co/spaces/sambanovasystems/BLOOMChat) - **Github**: [Link](https://github.com/sambanova/bloomchat) ### Licensing To increase accessibility and to support the open-source community, SambaNova is releasing BLOOMChat under a modified version of the Apache 2.0 license, which includes use-based restrictions from BLOOM’s RAIL license. While use-based restrictions are necessarily passed through, there are no blanket restrictions on reuse, distribution, commercialization or adaptation. [Please review SambaNova’s BLOOMChat-176B License](LICENSE) ## Uses <details> <summary>Click to expand</summary> <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is intended for commercial and research use. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> BLOOMChat should NOT be used for: - Mission-critical applications - Applications that involve the safety of others - Making highly important decisions - Important automated pipelines This model is still in early development and can be prone to mistakes and hallucinations, there is still room for improvement. This model is intended to provide the community with a multilingual chat LLM baseline. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases, limitations, and restrictions of the model, which are listed down at the bottom of the page. 
</details>

---

## How to Get Started with the Model

<details>
<summary>Click to expand</summary>

### Loading in model with Huggingface

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/BLOOMChat-176B-v1")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/BLOOMChat-176B-v1", device_map="auto", torch_dtype="auto")
```

### Quick Start Inference on SambaNova's in-house Reconfigurable Dataflow Unit (RDU)

The inference code to run the model can be found in our [github repo](https://github.com/sambanova/bloomchat/blob/main/rdu_quick_start/inference.py). This code requires the [SambaFlow](https://docs.sambanova.ai/developer/latest/sambaflow-intro.html) SDK to execute. For those interested in running models on RDUs, [please feel free to get in touch](https://sambanova.ai/getstarted).

### Quick Start Inference on GPU

First create a python virtual environment for these packages

```
python3 -m venv bloomchat_venv
source bloomchat_venv/bin/activate
pip install --upgrade pip
```

<!-- Please follow this section [Inference solutions for BLOOM 176B](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) in the Huggingface Tutorial for environment set up and stop before the [BLOOM inference via command-line ](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) section. -->

```
pip install flask flask_api gunicorn pydantic accelerate huggingface_hub>=0.9.0 deepspeed>=0.7.3 deepspeed-mii==0.0.2
```

And then

```
pip install transformers==4.27.0
```

You will see messages like this

```
ERROR: deepspeed-mii 0.0.2 has requirement transformers==4.21.2, but you'll have transformers 4.27.0 which is incompatible.
Installing collected packages: transformers
  Found existing installation: transformers 4.21.2
    Uninstalling transformers-4.21.2:
      Successfully uninstalled transformers-4.21.2
Successfully installed transformers-4.27.0
```

Now let's git clone the [huggingface/transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo.
```
git clone https://github.com/huggingface/transformers-bloom-inference.git
cd transformers-bloom-inference/
```

And then you need to modify two files in this [transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo:

- Modifying `inference_server/models/hf_accelerate.py`
  - This is because for our testing of this repo we used 4 80GB A100 GPUs and would run into memory issues
- Modifying `inference_server/cli.py`
  - This is because the model was trained using specific human, bot tags
  - Trailing spaces may lead to subpar performance

Modifications for `inference_server/models/hf_accelerate.py`:

```diff
diff --git a/inference_server/models/hf_accelerate.py b/inference_server/models/hf_accelerate.py
index 9be3c3f..a8ecb1d 100644
--- a/inference_server/models/hf_accelerate.py
+++ b/inference_server/models/hf_accelerate.py
@@ -1,4 +1,5 @@
 from argparse import Namespace
+from accelerate.utils.modeling import get_max_memory

 import torch

@@ -12,6 +13,12 @@ class HFAccelerateModel(Model):

         kwargs = {"pretrained_model_name_or_path": args.model_name, "device_map": "auto"}

+        original_max_memory_dict = get_max_memory()
+
+        reduce_max_memory_dict = {device_key: int(original_max_memory_dict[device_key] * 0.85) for device_key in original_max_memory_dict}
+
+        kwargs["max_memory"] = reduce_max_memory_dict
+
         if get_world_size() > 1:
             kwargs["device_map"] = "balanced_low_0"
```

Modifications for `inference_server/cli.py`:

```diff
diff --git a/inference_server/cli.py b/inference_server/cli.py
index fc903d5..5450236 100644
--- a/inference_server/cli.py
+++ b/inference_server/cli.py
@@ -22,6 +22,9 @@ def main() -> None:
     while True:
         input_text = input("Input text: ")

+        input_text = input_text.strip()
+        modified_input_text = f"<human>: {input_text}\n<bot>:"
+
         if input("change generate_kwargs? [y/n] ") == "y":
             while True:
                 try:
@@ -33,7 +36,7 @@ def main() -> None:
                     print("message =", e_message)
                     continue

-        response = model.generate(text=[input_text], generate_kwargs=generate_kwargs)
+        response = model.generate(text=[modified_input_text], generate_kwargs=generate_kwargs)

         print_rank_0("Output text:", response.text[0])
         print_rank_0("Generated tokens:", response.num_generated_tokens[0])
```

And now you are good to go!
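As a quick sanity check before the CLI commands below, here is a minimal direct-generation sketch using the `<human>`/`<bot>` template and the suggested inference parameters from this card. It assumes `model` and `tokenizer` were loaded as in the "Loading in model with Huggingface" section above; note that serving a 176B model this way requires substantial GPU memory:

```python
# Hedged sketch: the prompt template and the sampling parameters are taken from
# this card; everything else assumes the earlier loading snippet already ran.
prompt = "<human>: Create an itemized list of tasks to complete to start a clothing brand\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,          # suggested temperature
    repetition_penalty=1.2,   # suggested repetition penalty
    top_p=0.9,                # suggested top-p
    max_new_tokens=512,       # suggested max generated tokens
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```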
Running command for bf16, NO sampling

```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}'
```

Running command for bf16, YES sampling

```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}'
```

---

Running command for int8 (suboptimal performance, but fast inference time) NO sampling:

```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}'
```

Running command for int8 (suboptimal performance, but fast inference time) YES sampling:

```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}'
```

**DISCLAIMER:** When using int8, the results will be subpar compared to bf16 as the model is being [quantized](https://huggingface.co/blog/hf-bitsandbytes-integration#introduction-to-model-quantization).

### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.2
- Top-p: 0.9
- Max generated tokens: 512

### Suggested Prompts To Try in GPU Tutorial
```
Input text: Write a script in which Bob accidentally breaks his dad's guitar
```

```
Input text: Create an itemized list of tasks to complete to start a clothing brand
```

```
Input text: 十七岁的风是什么颜色的?
```

</details>

---

## Some example completions for English

<details>
<summary>Click to expand</summary>

```
<human>: What does it mean for the Fed to raise or lower interest rates?
<bot>: The Federal Reserve (also known as the Fed) is the central banking system of the United States. It was created in 1913, and is an independent agency of the federal government. The Fed has a dual mandate of maximizing employment and stabilizing prices. The Fed sets interest rates, and can influence the money supply and the credit markets through its actions.

Raising interest rates means that the Fed is trying to reduce the money supply in the economy, and is trying to discourage borrowing and encourage saving. Lowering interest rates means that the Fed is trying to stimulate economic growth by encouraging borrowing and reducing the cost of credit.

The Fed raises or lowers interest rates in order to influence the economy and achieve its goals for employment and price stability. Interest rates are one of the primary tools that the Fed uses to achieve its goals.
```

```
<human>: Write an engaging and well-written property listing description for selling a house. Address of the house and some of the details are given to you. Fill in the information gap with hallucinations if needed.

Property Address: 412 Monterey Ave, Capitola, CA 95010

Parking: 3 cars + a finished single car garage

Details:
- open floorplan
- window with views to park/mountains
- Kitchen with shaker cabinetry and white Calcatta quartz counters

<bot>: This home at 412 Monterey Ave, Capitola, CA 95010 is the perfect place to call home.
With its open floor plan, it provides the perfect space for entertaining guests. The kitchen is equipped with shaker cabinetry and white Calcatta quartz counters, providing the perfect space to create culinary masterpieces. The window with views to the park and mountains provide a beautiful view of the outdoors, making this home feel like a piece of paradise. The three-car garage and finished single car garage provide ample parking space, making this home a must-see for anyone looking for a new home. ``` ``` <human>: What is artificial intelligence? Explain it so an 8 year old will understand. <bot>: Artificial Intelligence (AI) is a computer program or machine that can do things that would normally require human intelligence, such as recognizing patterns, making decisions, and learning from experience. AI can be found in many places, such as self-driving cars, smartphones, and even robots. AI is used to make things easier for people, like when your smartphone can translate text into another language or when your voice-controlled speaker can play music for you. AI works by using algorithms and big data to learn and improve over time. For example, self-driving cars use AI to analyze the environment around them and make decisions about how to drive. The car gathers data from its sensors, processes it using algorithms, and adjusts its behavior accordingly. This process is called machine learning. AI is still a developing technology, and there are still many things that AI programs cannot do yet. However, as AI continues to evolve, it is likely that it will become more and more capable of performing tasks that were once thought to require human intelligence. ``` </details> --- ## Some example completions for Multilingual <details> <summary>Click to expand</summary> #### Arabic ``` <human>: هل العمل نصف دوام فكرة جيدة لمراهقين بين عمر ١٥ و ١٩ عام <bot>: نعم، يمكن أن تكون فكرة جيدة لبعض المراهقين في هذا العمر. هناك العديد من الفوائد التي قد يحققها المراهقون إذا عملوا بنصف دوام: - يساعدهم على تعلم المسؤولية وإدارة الوقت: عندما يعمل المراهق بنصف دوام، فإنه يتعلم كيفية إدارة وقته بشكل فعال وكيفية تقسيم ساعات عمله إلى مهام أصغر. - يحسن مهارات التواصل والعمل الجماعي: غالبًا ما يكون مكان العمل بيئة متعددة الثقافات حيث يلتقي المراهقون بمجموعة متنوعة من الأشخاص ويتعلمون كيفية التعاون معهم وتبادل الأفكار. - يوفر فرصة للتعرف على عالم العمل: يتيح العمل للمراهق التعرف على عالم العمل واكتساب خبرة عملية قيمة ستساعده لاحقًا عند البحث عن وظيفة أو بدء حياته المهنية. - يعزز الشعور بالمسؤولية والثقة بالنفس: عندما يعمل المراهق بنصف دوام ويحصل على دخل خاص به، فإنه يكتسب شعورًا أكبر بالمسؤولية والثقة بالنفس. ومع ذلك، لا ينبغي إجبار المراهقين على العمل بنصف دوام دون موافقتهم المسبقة. يجب أيضًا مراعاة أعمارهم وقدراتهم البدنية والعقلية قبل توظيفهم. ``` **DISCLAIMER**: Text ordering may be off for Arabic due to code block not being able to render #### Chinese ``` <human>: 请向我推荐一部周末看的电影并说明原因。 <bot>: “强烈向您推荐《美丽人生》这部影片。该片由罗伯托·贝尼尼自导自演,讲述了二战时期一对犹太父子被送进了纳粹集中营,父亲用自己的想象力为儿子编织了一个美好童真的世界,让儿子的心灵没有受到丝毫伤害的故事。 该片被誉为“最伟大的战争片”、“人类有史以来最佳影片之一”,豆瓣评分高达9.6分。它不仅是一部感人的亲情片,更是一部富含哲理的励志片和人生启示录。片中父亲的乐观豁达、幽默风趣的人生态度,对任何事都抱有积极热情的态度,即使在残酷无比的集中营里也是如此,深深地感染着观众。同时,这也是一部关于爱与希望的电影,让我们更加珍惜当下所拥有的一切,学会如何去爱身边的每一个人。 总之,这是一部值得一看再看的好电影,相信您一定会喜欢的! ``` #### French ``` <human>: Il y a t’il un petit-déjeuner de 700-1000 calories sans oeuf mais qui inclut des protéines. 
<bot>: Oui, voici une recette de petit déjeuner à base d'avoine et de fromage blanc : Ingrédients : - 1 tasse (250 ml) de lait d'amande ou autre lait végétal - 2 cuillères à soupe bombées (30 g) de flocons d'avoine - 1/2 banane mûre écrasée - 1/4 de pomme fraîche épluchée et coupée en dés - 1 cuillère à soupe de beurre d'amande ou autre purée d'oléagineux - 1 cuillère à café de cannelle moulue - 1 pincée de sel Préparation : 1. Dans une petite casserole, faites chauffer le lait jusqu'à ce qu'il soit chaud. 2. Ajoutez les flocons d'avoine, la banane, la pomme, le beurre d'amande, la cannelle et le sel dans le bol du mixeur et mélangez bien pour obtenir une pâte homogène. 3. Versez le mélange dans le lait chaud et remuez rapidement avec une spatule en bois pour éviter que les flocons d'avoine ne s'agglutinent au fond de la casserole. 4. Laissez mijoter sur feu doux pendant environ 5 minutes tout en continuant de remuer régulièrement. 5. Transférez ensuite le porridge dans un bol et servez immédiatement. Vous pouvez ajouter quelques fruits rouges frais pour décorer et apporter encore plus de gourmandise à votre petit déjeuner ! Bon appétit ! ``` #### Hindi ``` <human>: दिल्ली से जयपुर कैसे जाये? बस से या कार से? दोनो के बारे मे विस्तार मे बताये. <bot>: दिल्ली से जयपुर जाने का सबसे आसान और तेज़ तरीका है कि आप अपनी खुद की कार लेकर जाएं, क्योंकि यह आपको अपने समय पर निर्भर रहने की स्वतंत्रता देता है। हालांकि, यदि आपके पास अपनी कार नहीं है, तो आप दिल्ली से जयपुर तक एक निजी वाहन किराए पर ले सकते हैं। यदि आप सार्वजनिक परिवहन (बस) का उपयोग करना चाहते हैं, तो दिल्ली में कई सरकारी और निजी बस सेवाएं उपलब्ध हैं जो जयपुर को जोड़ती हैं। दिल्ली से जयपुर जाने वाली कुछ प्रमुख बसें हैं- 1. राजस्थान रोडवेज: ये राज्य सरकार द्वारा संचालित बसें हैं जो दिल्ली से जयपुर के बीच चलती हैं। ये नियमित रूप से चलती हैं और इनके लिए अग्रिम आरक्षण आवश्यक नहीं है। 2. ALSA: यह एक निजी कंपनी है जो दिल्ली से जयपुर के बीच नियमित रूप से चलती है। इसके टिकटों की बुकिंग ऑनलाइन और ऑफलाइन दोनों माध्यमों से की जा सकती है। 3. RedBus: यह एक ऑनलाइन बस बुकिंग वेबसाइट है जहां से आप दिल्ली से जयपुर के बीच चलनेवाली विभिन्न कंपनियों की बसों में सीट आरक्षित करा सकते हैं। 4. Ola and Uber: इन ऐप्स का उपयोग करके आप दिल्ली से जयपुर तक कैब राइड बुक कर सकते हैं, लेकिन ध्यान रखें कि यह विकल्प सस्ता नहीं होगा। दिल्ली से जयपुर जाने का सबसे अच्छा विकल्प आपकी व्यक्तिगत आवश्यकताओं, बजट, और उपलब्धता पर निर्भर करता है। यदि आप अपनी खुद की कार रखते हैं, तो इसका लाभ उठाएं अन्यथा किसी भी सार्वजनिक परिवहन साधन का चयन करें जो आपकी आवश्यकताओं को पूरा करे। ``` #### Spanish ``` <human>: ¿Me puedes dar algunas recomendaciones de juegos? Estoy buscando juegos de rol con un sistema de magia único, un sistema de construcción único y una buena historia. Por favor limítate a juegos que se puedan jugar en PC directamente, o por emulación. <bot>: Claro! Aquí te dejo tres recomendaciones: 1. The Outer Worlds: es un juego de rol de ciencia ficción con un sistema de magia basado en la tecnología alienígena que permite al jugador personalizar sus habilidades para adaptarse a su estilo de juego. 2. Divinity: Original Sin 2 - Definitive Edition: este juego combina elementos de RPG con estrategia en tiempo real (RTS), lo que le da al jugador un control total sobre cómo construir y mejorar su personaje y el mundo que lo rodea. 3. Pillars of Eternity: esta aventura de fantasía oscura tiene lugar en un vasto mundo abierto donde los jugadores pueden elegir entre diferentes clases y desarrollar relaciones con otros personajes del juego. 
La mecánica de combate está basada en turnos y utiliza hechizos y armas cuerpo a cuerpo.

Estos son solo algunos ejemplos de juegos que podrían interesarte según tus criterios. Hay muchos más títulos disponibles en el mercado que podrían ajustarse a tu gusto, así que no dudes en explorar otras opciones si estos juegos no cumplen con tus expectativas.
```

</details>

---

## Evaluation Graphs

<details>
<summary>Click to expand</summary>

<!-- This section describes the evaluation protocols and provides the results. -->

![Human evaluation](images/Human_evaluation.png)
<figure style="text-align:center;">
<figcaption><b>BLOOMChat vs Baseline Models in Human Preference Rankings</b></figcaption>
</figure>

<br>

![Human evaluation against GPT4](images/Human_evaluation_gpt4.png)
<figure style="text-align:center;">
<figcaption><b>BLOOMChat vs GPT-4 in Human Preference Ranking</b></figcaption>
</figure>

<br>

![Multilingual evaluation](images/Multilingual_capabilities_comparison.png)
<figure style="text-align:center;">
<figcaption><b>BLOOMChat surpasses other BLOOM variants and state-of-the-art open-source chat models in translation tasks [NOTE: Evaluation of BLOOM and BLOOMZ on WMT18 en->zh and zh->en used (human, bot) ChatML tags due to an unintentional configuration; results might be suboptimal.]</b></figcaption>
</figure>

<br>

</details>

---

## Training Details

<details>
<summary>Click to expand</summary>

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [OIG dataset from OpenChatKit](https://huggingface.co/datasets/laion/OIG)
- [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

We trained BLOOMChat on [SambaNova DataScale systems](https://sambanova.ai/products/datascale/) using SambaNova's in-house Reconfigurable Dataflow Units (RDUs). We started from [BLOOM (176B)](https://huggingface.co/bigscience/bloom), an open-source multilingual LLM pretrained by the [BigScience group](https://huggingface.co/bigscience). We instruction-tuned BLOOM (176B) on OpenChatKit with each data source subsampled to 100k examples for one epoch, followed by three epochs over the combined OpenChatKit and Dolly 2.0.
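As a concrete illustration of the subsampling step described above, a hypothetical sketch using the `datasets` library is shown below; the actual preprocessing lives in the repo linked in the next paragraph, and the file name, shuffling seed, and selection mechanics here are assumptions:

```python
from datasets import load_dataset

# Hypothetical: cap one OpenChatKit/OIG data source at 100k examples.
# "unified_chip2.jsonl" is one file from the laion/OIG repo; treat it as an example.
source = load_dataset("laion/OIG", data_files="unified_chip2.jsonl", split="train")
subsampled = source.shuffle(seed=42).select(range(min(100_000, len(source))))
print(len(subsampled))
```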
All of the code used to prepare the datasets and the scripts to run training and inference are open-sourced and freely available at [sambanova/bloomchat](https://github.com/sambanova/bloomchat/tree/main).

### Prompting Style Used For Training
```
<human>: {input1 that the user wants from the bot}
<bot>: {response1}</s>
<human>: {input2 that the user wants from the bot}
<bot>: {response2}</s>
```

### Hyperparameters

**Instruction-tuned Training on OIG**

- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 1
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- End Learning Ratio: 0.1
- Weight decay: 0.1

**Instruction-tuned Training on Dolly 2.0 and Oasst1**

- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 3
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- End Learning Ratio: 0.1
- Weight decay: 0.1

</details>

---

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, BLOOMChat has certain limitations:
- Hallucination: BLOOMChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: BLOOMChat may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: BLOOMChat may inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgment

We would like to extend our gratitude to [Together](https://www.together.xyz/) for their insightful technical discussions on overall project planning, data processing, model training, human evaluation experiment design, and open-source endeavors, and for their contributions to the data processing code for OpenChatKit, OASST1, and Dolly 2.0.

We are grateful to the various researchers and open-source projects that have contributed to the development of BLOOMChat. We thank [BigScience](https://bigscience.huggingface.co/) for providing the [BLOOM](https://huggingface.co/bigscience/bloom) model, which served as the base for our instruction tuning. We also thank [LAION](https://laion.ai/) for their [OIG dataset](https://huggingface.co/datasets/laion/OIG) and the OpenAssistant Conversations Dataset ([OASST1](https://huggingface.co/datasets/OpenAssistant/oasst1)), and we thank [Databricks](https://www.databricks.com/) for providing [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k); these are the datasets we instruction-tuned on.

We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [BigScience](https://bigscience.huggingface.co/) for their essential benchmarking contributions, which were very helpful in evaluating BLOOMChat's performance.
We appreciate the inspiration from the wave of various recent open-source chat models, including [OpenAssistant-30B](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor), [LLaMA-Adapter-V2-65B](https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b), [Vicuna-13b](https://huggingface.co/lmsys/vicuna-13b-delta-v0), [Koala-13b](https://huggingface.co/TheBloke/koala-13B-HF), [OASST-Pythia-12b](https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b), [Alpaca-13b](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g), [ChatGLM-6b](https://github.com/THUDM/ChatGLM-6B), [FastChat-T5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0), [Dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), [LLaMA-13b](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), [StableLM-Tuned-Alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b), [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1), [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1), [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) and so on. We look forward to witnessing the continued growth and success of open-source chat-based models.

We highly appreciate the hard work and dedication of these researchers and organizations towards the advancement of the open-source community. Their contributions were invaluable in the development of BLOOMChat, and we hope that our model can contribute to further advancements in the field.

## Cite BLOOMChat

```
@software{bloomchat,
  title = {{BLOOMChat: a New Open Multilingual Chat LLM}},
  author = {SambaNova Systems and Together Computer},
  url = {https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1},
  month = {5},
  year = {2023},
  version = {1.0},
}
```
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-05-10T21:22:54Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# iam-wateriro API Inference

![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/7455552411683753739.png)

## Get API Key

Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.

Replace the key in the code below and change **model_id** to "iam-wateriro".

Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Model link: [View model](https://stablediffusionapi.com/models/iam-wateriro)

Credits: [View credits](https://civitai.com/?query=iam-wateriro)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "iam-wateriro",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-05-11T02:00:07Z
--- tags: - generated_from_trainer model-index: - name: mt5-aym-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-aym-base This model is a fine-tuned version of [alvations/mt5-aym-base](https://huggingface.co/alvations/mt5-aym-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 200000 ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
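Since the usage sections above are placeholders, here is a minimal, hedged inference sketch. It loads the base checkpoint named in this card (swap in the fine-tuned repo id once it is published); whether the fine-tune expects a task prefix, and which translation direction it handles, is not documented here, so treat the input format as an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Base checkpoint named in this card; the fine-tuned repo id is not given.
model_id = "alvations/mt5-aym-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Kamisaraki", return_tensors="pt")  # sample Aymara input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```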
AnonymousSub/bert_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2023-05-10T21:56:21Z
--- license: creativeml-openrail-m --- # NekoCitrus So-Vits-SVC Models These models are designed to be used with [So-Vits-SVC 4.0](https://github.com/svc-develop-team/so-vits-svc). Please credit me if you use these models. ## Current Available Models: Ceres Fauna (Speaking) / 221 minutes 45 seconds / 38k steps
AnonymousSub/cline
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "LecbertForPreTraining" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2023-05-10T22:29:10Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 36.84 +/- 29.38 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
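For context on what the Reinforce algorithm from Unit 4 optimises, below is a minimal sketch of the Monte-Carlo policy-gradient loss. It is illustrative only and is not the code used to train this checkpoint:

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """REINFORCE loss for one episode.

    log_probs: list of log pi(a_t | s_t) tensors collected during the rollout
    rewards:   list of float rewards r_t
    """
    returns, g = [], 0.0
    for r in reversed(rewards):       # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    # Normalising returns is a common variance-reduction trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()

# Toy check with fake rollout data:
fake_log_probs = [torch.tensor(-0.5, requires_grad=True) for _ in range(3)]
reinforce_loss(fake_log_probs, [1.0, 0.0, 1.0]).backward()
```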
AnonymousSub/cline_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-05-10T22:31:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.26 +/- 17.73 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
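Until the TODO above is filled in, a plausible sketch is below. The repo id and filename are placeholders following the usual `huggingface_sb3` naming convention, and the evaluation mirrors the metric reported in this card:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename -- substitute the repo this card belongs to.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```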
AnonymousSub/consert-emanuals-s10-SR
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
2023-05-10T22:35:57Z
--- license: other tags: - text-to-image - stable-diffusion - finetune - icons - art language: - en --- ## new and shiny 。・:*:・゚’☆ [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/animatedicons.gif">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/animatedicons.gif) # about - updated version of [v1](https://huggingface.co/proxima/fantassified_icons), made with a dataset consisting of mostly the old version's dataset, but it's a lot better because I learned a few things since the dreambooth days - generates icons inspired by fantasy games with mostly plain backgrounds - no trigger words - either my local hires fix isn't working well or potions look weird when hires is turned on, will have to test that another time - examples are [made with this VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt) at 20 steps, 512x512, CFG 7, Euler a (try DPM ++2M for a look that is a bit sharper) --- ## examples [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/examples/1-3.png">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/examples/1-3.png) [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/examples/4-6.png">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/examples/4-6.png) [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/examples/7-9.png">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/examples/7-9.png) --- if you enjoy this consider buying me a coffee (ノ◕ヮ◕)ノ*:・゚✧ <a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> ---- ## Use with diffusers How to use it with [diffusers](https://github.com/huggingface/diffusers) ```python import torch from diffusers import StableDiffusionPipeline, DDIMScheduler scheduler = DDIMScheduler.from_pretrained("proximasanfinetuning/fantassified_icons_v2", subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained("proximasanfinetuning/fantassified_icons_v2", scheduler=scheduler).to("cuda") prompt = "A lemon themed high quality hamburger" images = pipe(prompt, num_images_per_prompt=6, num_inference_steps=25).images images[0] ``` --- # license This model is licensed under a modified CreativeML OpenRAIL-M license. * Utilizing and hosting the Fantassified Icons 1.0 model and its derivatives on platforms that earn, will earn, or plan to earn revenue or donations requires prior authorization. **To request permission, please email proximasan@protonmail.com.** * You are permitted to host the model card and files on both commercial and non-commercial websites, apps, etc. as long as you properly credit the model by stating its full name and providing a link to the model card (https://huggingface.co/proximasanfinetuning/fantassified_icons_v2), without performing any actual inference or finetuning. * The Fantassified Icons 1.0 model and its derivatives can be hosted on non-commercial websites, apps, etc. as long as no revenue or donations are received. Proper credit must be given by stating the full model name and including a link to the model card (https://huggingface.co/proximasanfinetuning/fantassified_icons_v2). 
* **The outputs of the model or its derivatives can be used for commercial purposes as long as the usage is limited to teams of 10 or fewer individuals.**
* You can't use the model to deliberately produce or share illegal or harmful outputs or content
* The author claims no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
* You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

Please read the full license here: https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/license.txt
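As a follow-up to the diffusers snippet above: to approximate the settings used for the example images (Euler a, 20 steps, 512x512, CFG 7, the vae-ft-mse VAE), one possible sketch is below. It assumes the diffusers-format `stabilityai/sd-vae-ft-mse` repo holds the same weights as the linked `.ckpt` VAE:

```python
from diffusers import AutoencoderKL, EulerAncestralDiscreteScheduler, StableDiffusionPipeline

# Assumption: diffusers-format twin of the linked vae-ft-mse-840000 checkpoint.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

pipe = StableDiffusionPipeline.from_pretrained(
    "proximasanfinetuning/fantassified_icons_v2", vae=vae
).to("cuda")
# "Euler a" in webui terms corresponds to the Euler ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a fantasy potion icon",
    num_inference_steps=20,
    guidance_scale=7,
    height=512,
    width=512,
).images[0]
image.save("potion_icon.png")
```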
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---

# Model card for maxvit_base_tf_224.in21k

An official MaxViT image classification model. Trained in TensorFlow on ImageNet-21k (the 21,843-class, Google-specific instance of ImageNet-22k) by the paper authors.

Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.

### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)

MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.

Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm`-specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 135.5
  - GMACs: 24.1
  - Activations (M): 95.0
  - Image size: 224 x 224
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-21k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('maxvit_base_tf_224.in21k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_base_tf_224.in21k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_base_tf_224.in21k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
### By Top-1
|model                                                                                                                     |top1 |top5 |samples / sec  |Params (M)     |GMAC  |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k)                      |88.53|98.64|          21.76|         475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k)                      |88.32|98.54|          42.53|         475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k)                          |88.20|98.53|          50.87|         119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| 
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| 
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) 
|86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
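A final usage note for this checkpoint: since it ships with the 21k-class pretraining head, it is typically fine-tuned for a downstream task rather than used as-is. A minimal transfer-learning sketch (the random tensors stand in for a real dataloader; `num_classes` is standard `timm` usage):

```python
import timm
import torch

# Replace the 21k-class head with a fresh head for a 10-class downstream task.
model = timm.create_model('maxvit_base_tf_224.in21k', pretrained=True, num_classes=10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# One illustrative training step on random data.
images = torch.randn(2, 3, 224, 224)
labels = torch.randint(0, 10, (2,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```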
AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---

# Model card for maxvit_large_tf_224.in21k

An official MaxViT image classification model. Trained in TensorFlow on ImageNet-21k (the 21,843-class, Google-specific instance of ImageNet-22k) by the paper authors.

Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.

### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)

MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.

Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm`-specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 233.1
  - GMACs: 43.7
  - Activations (M): 127.4
  - Image size: 224 x 224
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-21k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('maxvit_large_tf_224.in21k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_large_tf_224.in21k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 128, 112, 112])
    #  torch.Size([1, 128, 56, 56])
    #  torch.Size([1, 256, 28, 28])
    #  torch.Size([1, 512, 14, 14])
    #  torch.Size([1, 1024, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_large_tf_224.in21k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
### By Top-1
|model                                                                                                                     |top1 |top5 |samples / sec  |Params (M)     |GMAC  |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k)                      |88.53|98.64|          21.76|         475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k)                      |88.32|98.54|          42.53|         475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k)                          |88.20|98.53|          50.87|         119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| 
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| 
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|

## Citation
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
  title={MaxViT: Multi-Axis Vision Transformer},
  author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
  journal={ECCV},
  year={2022},
}
```
```bibtex
@article{dai2021coatnet,
  title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
  author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
  journal={arXiv preprint arXiv:2106.04803},
  year={2021}
}
```
AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-05-11T00:05:19Z
---
license: cc
datasets:
- VMware/open-instruct-v1.1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: conversational
---

# VMware/open-llama-0.3T-7B-open-instruct-v1.1

## License
- <b>Commercially Viable</b>
- Instruction dataset, [VMware/open-instruct-v1.1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1.1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model ([openlm-research/open_llama_7b_preview_300bt](https://huggingface.co/openlm-research/open_llama_7b_preview_300bt/tree/main/open_llama_7b_preview_300bt_transformers_weights)) is under apache-2.0

## Nomenclature
- Model: Open-llama
- Model trained on: 300B (0.3T) tokens
- Model Size: 7B parameters
- Dataset: Open-instruct-v1.1 (oasst, dolly, hhrlhf)

## Use in Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-0.3T-7B-open-instruct-v1.1'

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'

input_text = prompt_template.format(instruction=prompt)

input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])

print(output)

'''
The attention mechanism of a transformer model is designed to help the model understand the relationship between different parts of a sentence. The model uses a weighted attention score to determine how much each input token contributes to the output. The attention score is calculated by looking at the similarity between each input token and the output token, and assigning a weight to each input token based on this similarity. This way, the model can better understand the relationship between different parts of a sentence and generate more accurate predictions.
'''
```

## Drawbacks
- The model was trained on a partially trained Open-LLaMA checkpoint (300B tokens, or 30% of the training life cycle); there is huge potential for improvement when trained on fully trained Open-LLaMA checkpoints
- From what we have observed, the model struggles with few-shot prompting (we plan on addressing it with future iterations)
- When asked for code, it may or may not include the code within markdown format (```)
- It doesn't indent Python code

## Evaluation
<B>TODO</B>
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc inference: false --- # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. 
* License: _CC-By-SA-3.0_
  * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
  * License: _CC-By-NC-SA-4.0_
  * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)

## Model Date

May 5, 2023

## Model License

Apache-2.0

## Documentation

* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b',
  trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch  # needed for torch.bfloat16 below
import transformers

config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b',
  trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b',
  config=config,
  torch_dtype=torch.bfloat16,
  trust_remote_code=True
)
model.to(device='cuda:0')
```

Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:

```python
config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b',
  trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b',
  config=config,
  trust_remote_code=True
)
```

This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```

## Model Description

The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |

## Training Data

### Streaming Datasets

Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.

### Data Mix

The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:

| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 |

Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.

The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points.

### Training Configuration

This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}
```
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.41 +/- 17.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
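A minimal loading sketch, assuming the checkpoint in this repo is a standard SB3 `.zip`; the repo id and filename below are placeholders (check the repository's file list), and `gym`/`gymnasium` API details vary by version:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id / filename -- replace with this repository's actual values
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```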
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2023-05-11T00:35:11Z
---
license: mit
tags:
- chemistry
- molecule
- drug
---

# GPT2 Zinc 87m

This is a GPT2 style autoregressive language model trained on ~480m SMILES strings from the [ZINC database](https://zinc.docking.org/).

The model has ~87m parameters and was trained for 175000 iterations with a batch size of 3072 to a validation loss of ~.615. This model is useful for generating druglike molecules or generating embeddings from SMILES strings.

## How to use

```python
from transformers import GPT2TokenizerFast, GPT2LMHeadModel

tokenizer = GPT2TokenizerFast.from_pretrained("entropy/gpt2_zinc_87m", max_len=256)
model = GPT2LMHeadModel.from_pretrained('entropy/gpt2_zinc_87m')
```

To generate molecules:

```python
import torch  # needed for torch.tensor below

inputs = torch.tensor([[tokenizer.bos_token_id]])

gen = model.generate(
    inputs,
    do_sample=True,
    max_length=256,
    temperature=1.,
    early_stopping=True,
    pad_token_id=tokenizer.pad_token_id,
    num_return_sequences=32
)

smiles = tokenizer.batch_decode(gen, skip_special_tokens=True)
```

To compute embeddings:

```python
from transformers import DataCollatorWithPadding

collator = DataCollatorWithPadding(tokenizer, padding=True, return_tensors='pt')

inputs = collator(tokenizer(smiles))
outputs = model(**inputs, output_hidden_states=True)

full_embeddings = outputs[-1][-1]
mask = inputs['attention_mask']
embeddings = ((full_embeddings * mask.unsqueeze(-1)).sum(1) / mask.sum(-1).unsqueeze(-1))
```

## Model Performance

To test generation performance, 1m compounds were generated at various temperature values. Generated compounds were checked for uniqueness and structural validity.

* `percent_unique` denotes `n_unique_smiles/n_total_smiles`
* `percent_valid` denotes `n_valid_smiles/n_unique_smiles`
* `percent_unique_and_valid` denotes `n_valid_smiles/n_total_smiles`

| temperature | percent_unique | percent_valid | percent_unique_and_valid |
|--------------:|-----------------:|----------------:|---------------------------:|
| 0.5 | 0.928074 | 1 | 0.928074 |
| 0.75 | 0.998468 | 0.999967 | 0.998436 |
| 1 | 0.999659 | 0.999164 | 0.998823 |
| 1.25 | 0.999514 | 0.99351 | 0.993027 |
| 1.5 | 0.998749 | 0.970223 | 0.96901 |

Property histograms computed over 1m generated compounds:

![property histograms](https://github.com/kheyer/gpt2_zinc_87m/blob/main/generated_properties.png)
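As a sketch of how the uniqueness/validity numbers above might be computed (RDKit is an assumption; the card does not name the toolkit used):

```python
from rdkit import Chem

# smiles: list of generated SMILES strings from the snippet above
unique = set(smiles)
valid = [s for s in unique if Chem.MolFromSmiles(s) is not None]  # None means unparseable
print(f"percent_unique: {len(unique) / len(smiles):.6f}")
print(f"percent_valid:  {len(valid) / len(unique):.6f}")
```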
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
Access to model mjk11233/bigbird is restricted and you are not in the authorized list. Visit https://huggingface.co/mjk11233/bigbird to ask for access.
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-05-11T01:57:12Z
while True:
    reply1 = "What is your question?"
    print(reply1)
    user_response = input()
    reply2 = "What do you want the answer to be?"
    print(reply2)
    user_response = input()
    reply3 = "That is the answer. As your intuition is simply what you want. What is your next question?"
    print(reply3)
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="Xoyo/q-taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
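A greedy-rollout sketch on top of the loaded pickle (assumptions: the dict exposes a `"qtable"` key as in the Deep RL course's save format, and a gymnasium-style `reset`/`step` API; verify both against your setup):

```python
import numpy as np

state, _ = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```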
ArBert/albert-base-v2-finetuned-ner-gmm
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- # Model Card for AYXMI ## Model Description - **Developed by:** BADMONK - **Model type:** Dreambooth Model + Extracted LoRA - **Language(s) (NLP):** EN - **License:** Creativeml-Openrail-M - **Parent Model:** ChilloutMix # How to Get Started with the Model Use the code below to get started with the model. ### AYXMI ###
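The card says to use the code below, but none is included; here is a minimal diffusers sketch under stated assumptions (the repo id is a placeholder, and `ayxmi` as the trigger token is a guess from the model name):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- replace with this repository's actual id
pipe = StableDiffusionPipeline.from_pretrained("<user>/AYXMI", torch_dtype=torch.float16).to("cuda")
image = pipe("ayxmi, portrait photo, detailed face").images[0]
image.save("ayxmi.png")
```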
ArBert/albert-base-v2-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 235.66 +/- 26.09 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ArBert/albert-base-v2-finetuned-ner
[ "pytorch", "tensorboard", "albert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
null
--- language: - ckb license: mit tags: - hf-tts-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: SpeechT5 tts ckb7- Saber Molaei results: [] pipeline_tag: text-to-speech --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 tts ckb7- Saber Molaei This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 7000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6297 | 2.93 | 1000 | 0.5741 | | 0.5784 | 5.85 | 2000 | 0.5376 | | 0.5576 | 8.78 | 3000 | 0.5230 | | 0.5563 | 11.7 | 4000 | 0.5120 | | 0.5257 | 14.63 | 5000 | 0.5070 | | 0.5375 | 17.56 | 6000 | 0.5028 | | 0.5365 | 20.48 | 7000 | 0.5043 | ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
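An inference sketch for the fine-tuned checkpoint (assumptions: the repo id is a placeholder, and the x-vector speaker embedding comes from a public dataset as in the SpeechT5 docs):

```python
import torch
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "<user>/speecht5_tts_ckb"  # placeholder repo id
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a speaker embedding; any 512-dim x-vector works
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="your Central Kurdish text here", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
```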
ArBert/bert-base-uncased-finetuned-ner-agglo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model CooperElektrik/KoMETA-AI is restricted and you are not in the authorized list. Visit https://huggingface.co/CooperElektrik/KoMETA-AI to ask for access.
ArBert/bert-base-uncased-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- Conversational
---

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("sillon/DialoGPT-small-HospitalBot")
model = AutoModelForCausalLM.from_pretrained("sillon/DialoGPT-small-HospitalBot")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print last output tokens from bot
    print("HospitalBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
ArBert/roberta-base-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- autotrain
- token-classification
language:
- pt
widget:
- text: "O rato roeu a roupa do rei de Roma."
datasets:
- AllanFrost/autotrain-data-analise-morfosintatica-ptbr
co2_eq_emissions:
  emissions: 6.411793801438884
---

# Morphosyntax Model Trained Using AutoTrain - Dev: AllanFrostin

'MPL-2.0 license'

Allan Frostin's model is a morphosyntactic analysis model for the Portuguese language that uses machine learning techniques to identify different types of words in a sentence, such as verbs, nouns, and adjectives. It can analyze the grammatical structure of a sentence and identify specific entities within it.

The model was trained using AutoTrain, a technique that automates the process of training machine learning models. It was evaluated using several validation metrics, including loss, precision, recall, and F1 score, to determine its quality and performance on specific tasks.

The model ID is 57436132788, and it produced CO2 emissions of 6.4118 grams, which may be useful for assessing its environmental impact. The model can be accessed via cURL or the Python API.

With this model, it is possible to identify entities in a text. By tokenizing the text with the tokenizer and passing those tokens to the model, you can obtain the predictions. In a usage example, the model was able to identify several entities in the sentence "O rato roeu a roupa do rei de Roma", including an adjective, a noun, an article, a preposition, and two proper nouns.

I hope this helps! If you have any other questions, just ask.

AllanFrostin
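A sketch of the usage described above (the repo id is inferred from the AutoTrain dataset name and model ID in the card, and may differ):

```python
from transformers import pipeline

# Inferred AutoTrain repo id -- verify against the actual repository
tagger = pipeline("token-classification",
                  model="AllanFrost/autotrain-analise-morfosintatica-ptbr-57436132788")
print(tagger("O rato roeu a roupa do rei de Roma."))
```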
Aran/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="wengnews//tmp/tmpf8s2lv31/wengnews/tuning_llama_rl_checkpointsstep_9") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("wengnews//tmp/tmpf8s2lv31/wengnews/tuning_llama_rl_checkpointsstep_9") model = AutoModelForCausalLMWithValueHead.from_pretrained("wengnews//tmp/tmpf8s2lv31/wengnews/tuning_llama_rl_checkpointsstep_9") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Aries/T5_question_answering
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
5
null
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---

# controlnet-qiongzhouh/anime_canny_test

These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.

prompt: anime face of a girl with brown hair and brown eyes on a white background
![images_0](./images_0.png)
prompt: anime face of a boy with black hair and orange eyes on a white background
![images_1](./images_1.png)
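No usage code is included; a minimal ControlNet sketch under stated assumptions (the controlnet repo id is a placeholder, and Canny preprocessing with OpenCV is assumed from the model name):

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Placeholder repo id -- replace with this repository's actual id
controlnet = ControlNetModel.from_pretrained("<user>/anime_canny_test", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Build a Canny edge map as the conditioning image
source = np.array(Image.open("face.png").convert("RGB"))
edges = cv2.Canny(source, 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe("anime face of a girl with brown hair and brown eyes on a white background", image=canny).images[0]
image.save("out.png")
```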
Aries/T5_question_generation
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
13
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Precision: 1.0000 - Recall: 1.0000 - F1: 1.0000 - Accuracy: 1.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0004 | 1.0 | 43668 | 0.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | | 0.0001 | 2.0 | 87336 | 0.0001 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | | 0.0 | 3.0 | 131004 | 0.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Arkadiusz/Test-model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="PareshK/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
ArnaudPannatier/MLPMixer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en --- Related to https://github.com/chen700564/metaner-icl
Arnold/common_voiceha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.76
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="PareshK/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```