| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-11 00:42:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (553 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-11 00:42:38 |
| card | string (length) | 11 | 1.01M |
teppei727/bert_woco
teppei727
2023-07-10T10:30:30Z
110
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "arxiv:1702.00992", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-12T05:46:20Z
--- language: - en pipeline_tag: text-classification --- # bert-woco Fine-tuned BERT model for 13-class discourse relation classification, excluding one discourse relation (Expansion.Conjunction). It was introduced in the paper [Automatic Slide Generation Using Discourse Relations](https://link.springer.com/chapter/10.1007/978-3-031-36336-8_61) and first released in this repository. This model is uncased: it does not make a difference between english and English. In the method proposed in that [paper](https://link.springer.com/chapter/10.1007/978-3-031-36336-8_61), we used this model to classify the discourse relation between the SECOND and THIRD sentences and beyond in the summarized sentences. The model is NOT used between the FIRST and SECOND sentences. # Description This model classifies the discourse relation between an input sentence pair. We are still preparing the full model card. The model was trained from [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the dataset published in the paper [Automatic Prediction of Discourse Connectives](https://arxiv.org/abs/1702.00992). That dataset is based on English Wikipedia data and has 20 labels, but this model classifies into 13 labels: the 20-class dataset was restructured into 14 classes to suit our research objective of "automatic slide generation", and this distribution is shown below. The model does not include the discourse relation Expansion.Conjunction, because that relation presupposes a relation with the immediately preceding sentence pair, making it inappropriate to apply between the first and second sentences.
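A minimal inference sketch for this checkpoint (not part of the original card): it assumes the repository ships a tokenizer and a 13-label classification head, and that sentence pairs are fed in the standard BERT pair form; verify the input convention against the training code before relying on it.

```python
from transformers import pipeline

# Hedged sketch: load the 13-class discourse relation classifier.
# The exact label names come from the model's config and are not
# documented in the card above.
classifier = pipeline("text-classification", model="teppei727/bert_woco")

# Assumption: the sentence pair is passed as (text, text_pair), the usual
# BERT convention for pair classification; check the training setup.
result = classifier({"text": "The experiment failed.",
                     "text_pair": "The equipment had not been calibrated."})
print(result)
```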
sarahflan/xlm-roberta-base-finetuned-panx-de
sarahflan
2023-07-10T09:49:53Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-07T14:27:30Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.863220155832338 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 - F1: 0.8632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2578 | 1.0 | 525 | 0.1642 | 0.8263 | | 0.1289 | 2.0 | 1050 | 0.1397 | 0.8420 | | 0.0819 | 3.0 | 1575 | 0.1352 | 0.8632 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1+cu118 - Datasets 1.16.1 - Tokenizers 0.13.3
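A short usage sketch for the fine-tuned checkpoint (the auto-generated card above gives none); the entity label set (PER/ORG/LOC) is implied by PAN-X rather than stated in the card, and `aggregation_strategy="simple"` is an illustrative choice.

```python
from transformers import pipeline

# Hedged sketch: German NER with the fine-tuned XLM-R checkpoint.
ner = pipeline(
    "token-classification",
    model="sarahflan/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

for entity in ner("Jeff Dean arbeitet bei Google in Kalifornien."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```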
justinpinkney/falcon-7b
justinpinkney
2023-07-10T09:49:02Z
13
0
transformers
[ "transformers", "pytorch", "RefinedWebModel", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2101.00027", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-07T14:25:17Z
--- datasets: - tiiuae/falcon-refinedweb language: - en inference: false license: apache-2.0 duplicated_from: tiiuae/falcon-7b --- # 🚀 Falcon-7B **Falcon-7B is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.** *Paper coming soon* 😊. 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)! ## Why use Falcon-7B? * **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)). * **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions. ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct). 🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother! ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!** For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon). You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B. # Model Card for Falcon-7B ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English and French; - **License:** Apache 2.0. ### Model Source - **Paper:** *coming soon*. ## Uses ### Direct Use Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. ## Bias, Risks, and Limitations Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend that users of Falcon-7B finetune it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use. ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Training Details ### Training Data Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)). | **Data source** | **Fraction** | **Tokens** | **Sources** | |--------------------|--------------|------------|-----------------------------------| | [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl | | Books | 7% | 110B | | | Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews | | Code | 3% | 45B | | | RefinedWeb-French | 3% | 45B | massive web crawl | | Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. | The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer. ### Training Procedure Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO. #### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 | | Weight decay | 1e-1 | | | Z-loss | 1e-4 | | | Batch size | 2304 | 30B tokens ramp-up | #### Speeds, Sizes, Times Training happened in early March 2023 and took about two weeks. ## Evaluation *Paper coming soon*. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results. ## Technical Specifications ### Model Architecture and Objective Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences: * **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864)); * **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)); * **Decoder-block:** parallel attention/MLP with a single layer norm. | **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 32 | | | `d_model` | 4544 | Increased to compensate for multiquery | | `head_dim` | 64 | Reduced to optimise for FlashAttention | | Vocabulary | 65024 | | | Sequence length | 2048 | | ### Compute Infrastructure #### Hardware Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances. #### Software Falcon-7B was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.). ## Citation *Paper coming soon* 😊. In the meantime, you can use the following information to cite: ``` @article{falcon40b, title={{Falcon-40B}: an open large language model with state-of-the-art performance}, author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme}, year={2023} } ``` To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116). ``` @article{refinedweb, title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only}, author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay}, journal={arXiv preprint arXiv:2306.01116}, eprint={2306.01116}, eprinttype = {arXiv}, url={https://arxiv.org/abs/2306.01116}, year={2023} } ``` ## License Falcon-7B is made available under the Apache 2.0 license. ## Contact falconllm@tii.ae
ArimaKana38/alpaca-cmkl
ArimaKana38
2023-07-10T09:33:02Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-08T11:17:05Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
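The card above records only the quantization settings and PEFT version; the base model is not named. A hedged loading sketch that mirrors those settings, with a placeholder base model id, so read the real one from the adapter's `adapter_config.json`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 8-bit bitsandbytes settings the card reports for training.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)

# Placeholder: the true base model is not stated in the card; check
# adapter_config.json in the ArimaKana38/alpaca-cmkl repo.
BASE_MODEL = "huggyllama/llama-7b"

base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base, "ArimaKana38/alpaca-cmkl")
```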
dangvansam/whisper-small-vi-finetuned-750h
dangvansam
2023-07-10T09:27:05Z
75
2
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "vi", "dataset:vivos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-08T03:57:40Z
--- language: - vi license: apache-2.0 tags: - whisper-event datasets: - vivos metrics: - wer model-index: - name: Whisper Small Vietnamese results: # - task: # type: automatic-speech-recognition # name: Automatic Speech Recognition # dataset: # name: mozilla-foundation/common_voice_11_0 # type: mozilla-foundation/common_voice_11_0 # config: vi # split: test # metrics: # - type: wer # value: 16.63 # name: WER # - type: cer # value: 7.74 # name: CER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: vivos type: vivos split: test metrics: - type: wer value: 13.4 name: WER # - type: cer # value: 3.67 # name: CER ---
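A brief transcription sketch for this checkpoint (the card above is metadata only); the audio path and chunk length are illustrative.

```python
from transformers import pipeline

# Hedged sketch: transcribe Vietnamese audio with the fine-tuned Whisper model.
asr = pipeline(
    "automatic-speech-recognition",
    model="dangvansam/whisper-small-vi-finetuned-750h",
    chunk_length_s=30,  # illustrative; enables long-form audio
)

print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder file
```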
Phips/Taxi-v3
Phips
2023-07-10T09:16:11Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T09:02:12Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Phips/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
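The `load_from_hub` call in the usage snippet above refers to a helper defined in the Deep RL Course notebooks, not to a published package. A minimal stand-in, assuming the agent is stored as a pickled dict (the exact keys, such as `env_id`, depend on how it was saved):

```python
import pickle

import gymnasium as gym  # classic `gym` also works for Taxi-v3
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning agent from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Phips/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # assumes the pickle stores an "env_id" key
```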
crisU8/bert-finetuned-ner-clinical-plncmm-large-23
crisU8
2023-07-10T09:09:41Z
121
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T09:04:36Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-clinical-plncmm-large-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-clinical-plncmm-large-23 This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2372 - Precision: 0.7614 - Recall: 0.8233 - F1: 0.7911 - Accuracy: 0.9322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 20 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.611 | 1.0 | 686 | 0.2341 | 0.7001 | 0.7997 | 0.7466 | 0.9248 | | 0.2088 | 2.0 | 1372 | 0.2449 | 0.7406 | 0.8227 | 0.7795 | 0.9294 | | 0.1203 | 3.0 | 2058 | 0.2372 | 0.7614 | 0.8233 | 0.7911 | 0.9322 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Shamalka/Robin_Gibb
Shamalka
2023-07-10T09:04:14Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-07-10T09:04:14Z
--- license: bigscience-bloom-rail-1.0 ---
soBeauty/xlm-roberta-base-09072023-revised_2
soBeauty
2023-07-10T08:59:49Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-09T15:31:18Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlm-roberta-base-09072023-revised_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-09072023-revised_2 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: 0.5577 - Loss: 2.2632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 2.725 | 3.85 | 500 | 0.4944 | 2.4300 | | 2.5409 | 7.69 | 1000 | 0.5577 | 2.2632 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Aeala/Chronoboros-33b-4bit
Aeala
2023-07-10T08:48:19Z
5
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T06:52:52Z
4-bit GPTQ quantization of the [chronoboros-33b](https://huggingface.co/Henk717/chronoboros-33B) merge.
danbrown/checkpoints
danbrown
2023-07-10T08:43:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T08:20:33Z
--- license: creativeml-openrail-m --- This is a collection of Stable Diffusion model checkpoints, just like my other LoRA collection. I may list more details here as I add models. The models here can be third-party checkpoints or personal experiments.
SpringYung/falcon_with_10latex
SpringYung
2023-07-10T08:42:21Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-10T08:41:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
ycros/chronoboros-33B-GGML
ycros
2023-07-10T08:30:36Z
0
0
null
[ "license:other", "region:us" ]
null
2023-07-10T03:09:57Z
--- license: other --- Quantizations of https://huggingface.co/Henk717/chronoboros-33B - see that repo for more information.
SpringYung/dolly_with_10examples
SpringYung
2023-07-10T08:30:27Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-10T08:30:04Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-skew-sz50-v1
hw2942
2023-07-10T08:19:13Z
87
0
transformers
[ "transformers", "pytorch", "tensorboard", "longformer", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T08:00:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-skew-sz50-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-skew-sz50-v1 This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 32 | 0.8551 | 0.5 | | No log | 2.0 | 64 | 0.6943 | 0.5 | | No log | 3.0 | 96 | 0.7137 | 0.5 | | No log | 4.0 | 128 | 0.7191 | 0.5 | | No log | 5.0 | 160 | 0.6997 | 0.5 | | No log | 6.0 | 192 | 0.7076 | 0.5 | | No log | 7.0 | 224 | 0.7121 | 0.5 | | No log | 8.0 | 256 | 0.6938 | 0.5 | | No log | 9.0 | 288 | 0.6941 | 0.5 | | No log | 10.0 | 320 | 0.6932 | 0.5 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
crisU8/bert-finetuned-ner-clinical-plncmm-large-15
crisU8
2023-07-10T08:15:12Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T07:57:20Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-clinical-plncmm-large-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-clinical-plncmm-large-15 This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2339 - Precision: 0.7526 - Recall: 0.8282 - F1: 0.7886 - Accuracy: 0.9309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 429 | 0.2466 | 0.6954 | 0.7958 | 0.7423 | 0.9223 | | 0.5736 | 2.0 | 858 | 0.2380 | 0.7354 | 0.8178 | 0.7744 | 0.9264 | | 0.1845 | 3.0 | 1287 | 0.2339 | 0.7526 | 0.8282 | 0.7886 | 0.9309 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
komo-dono/matsuoka
komo-dono
2023-07-10T08:14:09Z
0
0
null
[ "region:us" ]
null
2023-07-10T08:12:42Z
--- license: openrail language: - ja tags: - music --- matsuoka, 500 epochs
MMG/mlm-spanish-roberta-base
MMG
2023-07-10T08:01:29Z
120
1
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - es widget: - text: "MMG se dedica a la <mask> artificial." --- # mlm-spanish-roberta-base This model has a RoBERTa base architecture and was trained from scratch with 3.6 GB of raw text over 10 epochs. 4 Tesla V-100 GPUs were used for the training. To test the quality of the resulting model we evaluate it over the [GLUES](https://github.com/dccuchile/GLUES) benchmark for Spanish NLU. The results are the following: | Task | Score (metric) | |:-----------------------:|:---------------------:| | XNLI | 71.99 (accuracy) | | Paraphrasing | 74.85 (accuracy) | | NER | 85.34 (F1) | | POS | 97.49 (accuracy) | | Dependency Parsing | 85.14/81.08 (UAS/LAS) | | Document Classification | 93.00 (accuracy) |
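A fill-mask sketch using the widget text declared in the card's metadata; `top_k` is an illustrative choice.

```python
from transformers import pipeline

# Hedged sketch: query the Spanish RoBERTa masked language model.
fill = pipeline("fill-mask", model="MMG/mlm-spanish-roberta-base")

for pred in fill("MMG se dedica a la <mask> artificial.", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```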
lloydchang/wongstein-vide-noir
lloydchang
2023-07-10T07:49:17Z
207
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "text-generation-inference", "en", "dataset:amazon_us_reviews", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T07:43:16Z
--- license: creativeml-openrail-m datasets: - amazon_us_reviews language: - en tags: - text-generation-inference ---
HamZurger/ppo-LunarLander-v2
HamZurger
2023-07-10T07:37:02Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T07:36:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.22 +/- 11.20 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
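In place of the card's TODO, a hedged loading sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention of SB3 Hub uploads and is an assumption, so check the repository's file list if it differs.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the zip is named after the algorithm and environment.
checkpoint = load_from_hub(
    repo_id="HamZurger/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```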
zhundred/Taxi-v3
zhundred
2023-07-10T07:27:01Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T07:26:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="zhundred/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
digiplay/2.5DSET_new1a25d_FFver
digiplay
2023-07-10T07:08:47Z
343
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-14T11:36:49Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/18634?modelVersionId=24184 Sample images I made: I recommend using the "8k" keyword at the beginning of the prompt. ![24dsetFF - 2023-06-15T004719.892.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/vP1jYSTKbvQcIvkkwSwl8.png) ![25Dff - 2023-06-15T005247.064.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/r64GaHr0iC1rc_-soar-f.png) ![25DSET FF2023-06-15T005611.046.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/U4sqwFmSbxJxZ1vgGAf8f.png)
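A short diffusers sketch (not part of the model card); the prompt is illustrative and simply follows the "8k" recommendation above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: generate an image with this checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/2.5DSET_new1a25d_FFver",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "8k, portrait of a woman in a flower garden, soft lighting"
image = pipe(prompt).images[0]
image.save("sample.png")
```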
crisU8/bert-finetuned-ner-clinical-plncmm-large-11
crisU8
2023-07-10T07:00:16Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T06:38:46Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-clinical-plncmm-large-11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-clinical-plncmm-large-11 This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2334 - Precision: 0.7534 - Recall: 0.8216 - F1: 0.7860 - Accuracy: 0.9328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 429 | 0.2492 | 0.6813 | 0.7849 | 0.7294 | 0.9211 | | 0.6251 | 2.0 | 858 | 0.2420 | 0.7467 | 0.8189 | 0.7812 | 0.9288 | | 0.1942 | 3.0 | 1287 | 0.2334 | 0.7534 | 0.8216 | 0.7860 | 0.9328 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
fredrickrsq/Chat_Suzumiya_GLM2LoRA
fredrickrsq
2023-07-10T06:58:41Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-10T06:55:14Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
hoanghoavienvo/bert-large-uncased-detect-depression-stage-one
hoanghoavienvo
2023-07-10T06:55:05Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-09T22:16:38Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-uncased-detect-depression-stage-one results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-detect-depression-stage-one This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.7875 - eval_accuracy: 0.752 - eval_f1: 0.8092 - eval_runtime: 112.7187 - eval_samples_per_second: 8.872 - eval_steps_per_second: 2.218 - epoch: 3.0 - step: 4506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
b-koopman/whisper-base-finetuned-gtzan
b-koopman
2023-07-10T06:51:15Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-10T06:33:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-base-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-finetuned-gtzan This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.3910 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8923 | 1.0 | 113 | 0.7722 | 0.74 | | 0.8088 | 2.0 | 226 | 0.6883 | 0.78 | | 0.3561 | 3.0 | 339 | 0.7117 | 0.78 | | 0.0312 | 4.0 | 452 | 0.4188 | 0.88 | | 0.0108 | 5.0 | 565 | 0.3910 | 0.88 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
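A brief usage sketch (not in the generated card); `track.wav` is a placeholder audio file and `top_k` is illustrative.

```python
from transformers import pipeline

# Hedged sketch: classify a music clip into one of the GTZAN genres.
clf = pipeline(
    "audio-classification",
    model="b-koopman/whisper-base-finetuned-gtzan",
)

for pred in clf("track.wav", top_k=3):
    print(pred["label"], round(pred["score"], 3))
```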
VitCon/ppo-LunarLander-v2
VitCon
2023-07-10T06:39:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T06:39:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.56 +/- 22.20 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
charqican/nominal-groups-recognition-bert-base-spanish-wwm-cased
charqican
2023-07-10T06:37:54Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "es", "dataset:charqican/spanish_nominal_groups_conll2003", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T06:30:03Z
--- language: - es tags: - generated_from_trainer datasets: - charqican/spanish_nominal_groups_conll2003 model-index: - name: nominal-groups-recognition-bert-base-spanish-wwm-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nominal-groups-recognition-bert-base-spanish-wwm-cased This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the charqican/spanish_nominal_groups_conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.2772 - Ng Precision: 0.7140 - Ng Recall: 0.7695 - Ng F1: 0.7407 - Ng Number: 3198 - Overall Precision: 0.7140 - Overall Recall: 0.7695 - Overall F1: 0.7407 - Overall Accuracy: 0.8993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Ng Precision | Ng Recall | Ng F1 | Ng Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------------:|:---------:|:------:|:---------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.3988 | 1.0 | 228 | 0.2792 | 0.7108 | 0.7577 | 0.7335 | 3198 | 0.7108 | 0.7577 | 0.7335 | 0.8935 | | 0.2257 | 2.0 | 456 | 0.2772 | 0.7140 | 0.7695 | 0.7407 | 3198 | 0.7140 | 0.7695 | 0.7407 | 0.8993 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
UNIST-Eunchan/pegasus-x-booksum-chapter
UNIST-Eunchan
2023-07-10T06:32:23Z
88
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus_x", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-09T09:14:08Z
--- tags: - generated_from_trainer model-index: - name: pegasus-x-booksum-chapter results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-x-booksum-chapter This model is a fine-tuned version of [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5993 {'rouge1': 0.23378525407555606, 'rouge2': 0.0362962105694859, 'rougeL': 0.13587636708639556, 'rougeLsum': 0.13593997471043634} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8377 | 0.67 | 200 | 2.7300 | | 2.7462 | 1.33 | 400 | 2.6613 | | 2.7148 | 2.0 | 600 | 2.6345 | | 2.6242 | 2.67 | 800 | 2.6249 | | 2.5971 | 3.33 | 1000 | 2.6150 | | 2.6103 | 4.0 | 1200 | 2.6092 | | 2.5763 | 4.67 | 1400 | 2.6083 | | 2.5737 | 5.33 | 1600 | 2.6035 | | 2.6252 | 6.0 | 1800 | 2.6007 | | 2.5402 | 6.67 | 2000 | 2.6004 | | 2.5278 | 7.33 | 2200 | 2.6007 | | 2.5173 | 8.0 | 2400 | 2.5993 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
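A hedged summarization sketch (the generated card gives no usage example); the generation lengths are illustrative, and the placeholder string stands in for a full book chapter.

```python
from transformers import pipeline

# Hedged sketch: chapter-level summarization with the fine-tuned PEGASUS-X.
summarizer = pipeline(
    "summarization",
    model="UNIST-Eunchan/pegasus-x-booksum-chapter",
)

chapter = "..."  # placeholder: the full text of a book chapter
summary = summarizer(chapter, max_length=256, min_length=64)[0]["summary_text"]
print(summary)
```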
marcioporto/distilbert-base-uncased-finetuned-cola
marcioporto
2023-07-10T06:15:48Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T04:56:52Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: marcioporto/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # marcioporto/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1960 - Validation Loss: 0.5374 - Train Matthews Correlation: 0.5132 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5148 | 0.4837 | 0.4509 | 0 | | 0.3222 | 0.4815 | 0.5046 | 1 | | 0.1960 | 0.5374 | 0.5132 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Arindam75/a2c-AntBulletEnv-v0
Arindam75
2023-07-10T06:13:29Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T07:30:39Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1735.43 +/- 93.64 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
crisU8/bert-finetuned-ner-clinical-plncmm-large-7
crisU8
2023-07-10T06:09:28Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T06:00:18Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-clinical-plncmm-large-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-clinical-plncmm-large-7 This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2317 - Precision: 0.7503 - Recall: 0.8227 - F1: 0.7848 - Accuracy: 0.9326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 429 | 0.2497 | 0.6860 | 0.7794 | 0.7297 | 0.9201 | | 0.6187 | 2.0 | 858 | 0.2391 | 0.7384 | 0.8134 | 0.7741 | 0.9293 | | 0.1936 | 3.0 | 1287 | 0.2317 | 0.7503 | 0.8227 | 0.7848 | 0.9326 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
NasimB/gpt2-concat-guten-mod-rm-refrences-1p7k
NasimB
2023-07-10T05:56:47Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T04:00:26Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-guten-mod-rm-refrences-1p7k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-guten-mod-rm-refrences-1p7k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6974 | 0.29 | 500 | 5.6415 | | 5.3331 | 0.58 | 1000 | 5.1970 | | 4.9805 | 0.88 | 1500 | 4.9464 | | 4.7094 | 1.17 | 2000 | 4.7978 | | 4.5465 | 1.46 | 2500 | 4.6746 | | 4.4438 | 1.75 | 3000 | 4.5714 | | 4.3256 | 2.04 | 3500 | 4.4890 | | 4.1252 | 2.34 | 4000 | 4.4453 | | 4.0923 | 2.63 | 4500 | 4.3874 | | 4.0485 | 2.92 | 5000 | 4.3318 | | 3.8592 | 3.21 | 5500 | 4.3258 | | 3.7904 | 3.5 | 6000 | 4.2931 | | 3.7755 | 3.79 | 6500 | 4.2598 | | 3.6816 | 4.09 | 7000 | 4.2575 | | 3.5062 | 4.38 | 7500 | 4.2557 | | 3.4984 | 4.67 | 8000 | 4.2391 | | 3.4904 | 4.96 | 8500 | 4.2253 | | 3.334 | 5.25 | 9000 | 4.2373 | | 3.3045 | 5.55 | 9500 | 4.2375 | | 3.3115 | 5.84 | 10000 | 4.2364 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
crisU8/bert-finetuned-ner-clinical-plncmm-large-6
crisU8
2023-07-10T05:48:10Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T05:30:54Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-clinical-plncmm-large-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-clinical-plncmm-large-6 This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2564 - Precision: 0.7685 - Recall: 0.8364 - F1: 0.8011 - Accuracy: 0.9350 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 429 | 0.2349 | 0.7107 | 0.8183 | 0.7607 | 0.9245 | | 0.3161 | 2.0 | 858 | 0.2470 | 0.7442 | 0.8238 | 0.7820 | 0.9271 | | 0.1608 | 3.0 | 1287 | 0.2427 | 0.7555 | 0.8244 | 0.7885 | 0.9329 | | 0.1088 | 4.0 | 1716 | 0.2564 | 0.7685 | 0.8364 | 0.8011 | 0.9350 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Sukmin/Reinforce-PixelCopter
Sukmin
2023-07-10T05:46:30Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T03:34:22Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 35.40 +/- 26.24 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
tyavika/Bert_CNN512LSTM256NoBid2
tyavika
2023-07-10T05:39:34Z
77
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-09T11:33:15Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Bert_CNN512LSTM256NoBid2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert_CNN512LSTM256NoBid2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.5349 | 1.0 | 3290 | 1.2728 | | 0.9147 | 2.0 | 6580 | 1.0476 | | 0.6235 | 3.0 | 9870 | 1.0559 | | 0.4168 | 4.0 | 13160 | 1.3049 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
WALIDALI/marimrev
WALIDALI
2023-07-10T05:32:46Z
2
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-10T05:29:09Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### marimrev Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
edgamer/q-FrozenLake-v1-4x4-noSlippery
edgamer
2023-07-10T05:29:25Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T05:29:22Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="edgamer/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
smangrul/peft-lora-codegen-25-guanaco-v100-colab
smangrul
2023-07-10T05:11:58Z
9
4
peft
[ "peft", "tensorboard", "generated_from_trainer", "base_model:Salesforce/codegen25-7b-multi_P", "base_model:adapter:Salesforce/codegen25-7b-multi_P", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-07-08T10:24:14Z
--- license: apache-2.0 base_model: Salesforce/codegen25-7b-multi tags: - generated_from_trainer model-index: - name: peft-lora-codgen-25-guanaco-t4-colab results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-lora-codgen-25-guanaco-t4-colab This model is a fine-tuned version of [Salesforce/codegen25-7b-multi](https://huggingface.co/Salesforce/codegen25-7b-multi) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.4.0.dev0 - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
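A hedged sketch of loading the LoRA adapter on its base model, mirroring the 4-bit nf4 settings the card reports for training; quantized inference and `trust_remote_code` are assumptions about the CodeGen2.5 base rather than instructions from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 4-bit nf4 quantization reported in the training procedure.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

BASE = "Salesforce/codegen25-7b-multi"
base = AutoModelForCausalLM.from_pretrained(
    BASE,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # the CodeGen2.5 repo ships custom code on the Hub
)
tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "smangrul/peft-lora-codegen-25-guanaco-v100-colab")
```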
Sukmin/Reinforce-cartpole
Sukmin
2023-07-10T05:01:13Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T03:08:56Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
rtyui123/ppo-LunarLander-v2
rtyui123
2023-07-10T04:59:38Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T04:59:19Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 262.78 +/- 23.88 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
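Until the TODO above is filled in, a minimal loading sketch in the spirit of the `huggingface_sb3` examples might look like the following; the checkpoint filename is an assumption and should be checked against the repository files.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the checkpoint follows the usual "<algo>-<env>.zip" naming convention.
checkpoint = load_from_hub(repo_id="rtyui123/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```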
ericNguyen0132/roberta-large-first
ericNguyen0132
2023-07-10T04:54:12Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-08T08:35:51Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: roberta-large-first results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-first This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2025 - Accuracy: 0.8483 - F1: 0.9100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 469 | 0.3694 | 0.8783 | 0.9312 | | 0.3984 | 2.0 | 938 | 0.4488 | 0.87 | 0.9257 | | 0.3366 | 3.0 | 1407 | 0.5764 | 0.855 | 0.9141 | | 0.295 | 4.0 | 1876 | 0.7716 | 0.8533 | 0.9132 | | 0.1876 | 5.0 | 2345 | 0.8724 | 0.855 | 0.9148 | | 0.1378 | 6.0 | 2814 | 1.1717 | 0.825 | 0.8921 | | 0.0836 | 7.0 | 3283 | 1.0711 | 0.8367 | 0.9028 | | 0.043 | 8.0 | 3752 | 1.0807 | 0.8633 | 0.9202 | | 0.0402 | 9.0 | 4221 | 1.2285 | 0.8367 | 0.9024 | | 0.035 | 10.0 | 4690 | 1.2025 | 0.8483 | 0.9100 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
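Since no usage snippet is provided, here is a minimal, hedged inference sketch with the `transformers` pipeline; the label names returned depend on how the fine-tuning labels were configured, so inspect the output rather than assuming their meaning.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ericNguyen0132/roberta-large-first")
# Example input only; check the returned label ids against the training setup.
print(classifier("I haven't been able to get out of bed or talk to anyone for days."))
```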
Minggu/anizhchie2
Minggu
2023-07-10T04:51:49Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T04:48:59Z
--- license: creativeml-openrail-m ---
crisU8/bert-finetuned-ner-clinical-plncmm-large-1
crisU8
2023-07-10T04:42:21Z
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T04:30:54Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-clinical-plncmm-large-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-clinical-plncmm-large-1 This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2406 - Precision: 0.7503 - Recall: 0.8227 - F1: 0.7848 - Accuracy: 0.9318 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3906 | 1.0 | 857 | 0.2271 | 0.7130 | 0.8101 | 0.7585 | 0.9253 | | 0.1758 | 2.0 | 1714 | 0.2378 | 0.7460 | 0.8222 | 0.7822 | 0.9290 | | 0.125 | 3.0 | 2571 | 0.2406 | 0.7503 | 0.8227 | 0.7848 | 0.9318 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
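For readers who want to try the model, a minimal sketch using the token-classification pipeline is shown below; the example sentence is illustrative only (the base model is a Spanish clinical BERT), and the entity label names come from the fine-tuning data.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="crisU8/bert-finetuned-ner-clinical-plncmm-large-1",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Paciente con dolor abdominal y fiebre de tres días de evolución."))
```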
Raj-Sanjay-Shah/babyLM_roberta_base_epoch_5
Raj-Sanjay-Shah
2023-07-10T04:40:28Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-10T04:15:35Z
--- license: cc-by-nc-sa-4.0 ---
cwiz/llama-7b-saiga-merged
cwiz
2023-07-10T04:26:24Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-09T18:56:19Z
--- license: apache-2.0 --- [Saiga](https://huggingface.co/IlyaGusev/saiga_7b_lora) merged with [LLaMa-7b](https://huggingface.co/decapoda-research/llama-7b-hf) for further finetuning.
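A minimal, hedged sketch for loading the merged checkpoint with `transformers` (for generation or as a starting point for further fine-tuning); the prompt format expected by Saiga is not documented here, so plain text is used purely for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cwiz/llama-7b-saiga-merged")
model = AutoModelForCausalLM.from_pretrained(
    "cwiz/llama-7b-saiga-merged",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Plain-text prompt for illustration; adapt to the Saiga conversation template if needed.
inputs = tokenizer("Привет! Расскажи немного о себе.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```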
casque/productDesign_eddiemauro20
casque
2023-07-10T04:26:09Z
0
2
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T03:54:05Z
--- license: creativeml-openrail-m ---
NiscR/Cartpole-v1
NiscR
2023-07-10T04:14:56Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T04:14:46Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
marcioporto/ppo-Huggy
marcioporto
2023-07-10T04:08:25Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-10T04:08:21Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: marcioporto/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
eugene-yang/colbertx-xlmr-large-tt-eng.rus
eugene-yang
2023-07-10T03:53:43Z
33
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "en", "zh", "arxiv:2201.08471", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-07-10T03:41:37Z
--- license: mit language: - en - zh task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - cross-language-retrieval --- Model trained by [Suraj Nair](https://srnair.netlify.app/). If you use the model, please cite our paper. ```bibtex @inproceedings{colbert-x, author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard}, title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models}, booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)}, year = {2022}, url = {https://arxiv.org/abs/2201.08471} } ```
casque/vectorArt_pulpVectorBeta
casque
2023-07-10T03:52:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T03:33:58Z
--- license: creativeml-openrail-m ---
eugene-yang/colbertx-xlmr-large-tt-eng.fas
eugene-yang
2023-07-10T03:47:02Z
32
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "en", "zh", "arxiv:2201.08471", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-07-10T03:41:30Z
--- license: mit language: - en - zh task_categories: - text-retrieval task_ids: - passage-retrieval - cross-language-retrieval --- Model trained by [Suraj Nair](https://srnair.netlify.app/). If you use the model, please cite our paper. ```bibtex @inproceedings{colbert-x, author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard}, title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models}, booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)}, year = {2022}, url = {https://arxiv.org/abs/2201.08471} } ```
biodatlab/MIReAD-Neuro-Contrastive
biodatlab
2023-07-10T03:33:52Z
9
1
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-08T06:21:36Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # MIReAD-Neuro-Contrastive This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('biodatlab/MIReAD-Neuro-Contrastive') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('biodatlab/MIReAD-Neuro-Contrastive') model = AutoModel.from_pretrained('biodatlab/MIReAD-Neuro-Contrastive') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=biodatlab/MIReAD-Neuro-Contrastive) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 15616 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 8, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Honkware/Wizard-Vicuna-13B-Uncensored-SpQR
Honkware
2023-07-10T03:32:50Z
10
1
transformers
[ "transformers", "llama", "text-generation", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-09T21:10:01Z
--- license: other model-index: - name: Wizard-Vicuna-13B-Uncensored-SpQR results: - task: type: text-generation-inference name: Text Generation dataset: type: c4 name: C4 metrics: - type: perplexity value: 7.354 - task: type: text-generation-inference name: Text Generation dataset: type: wikitext2 name: WikiText-2 metrics: - type: perplexity value: 5.685 - task: type: text-generation-inference name: Text Generation dataset: type: ptb name: PTB metrics: - type: perplexity value: 20.822 pipeline_tag: text-generation --- # Wizard-Vicuna-13B-Uncensored-SpQR ## Overview This model is an SpQR 4-bit quantization of the original [Wizard-Vicuna-13B-Uncensored-HF](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF) ## Quantization Specifications - **Quantization**: 4-bit, group size of 16, per-channel with scale and zero-point of 3 bits. - **Outliers**: Threshold set at 0.2. - **Permutation Order**: `act_order`. - **Dampening**: Set at 1e0. - **Sampling**: 128 samples. - **Logging**: Via [Weights & Biases](https://wandb.ai/hampterbyte/Wizard-Vicuna-13B-Uncensored-SpQR/runs/95vcnhr8/overview). ## Evaluation Metrics The following perplexity scores were obtained on various datasets: | Dataset | Perplexity | |:---------:|:----------:| | c4 | 7.354 | | wikitext2 | 5.685 | | ptb | 20.822 |
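For context, the perplexity figures above follow the usual full-concatenation, fixed-window evaluation. The sketch below is a generic illustration of that procedure on WikiText-2, not the exact script used for this card; it assumes the checkpoint has already been loaded (e.g. via the SpQR reference implementation) into a causal LM `model` with a matching `tokenizer`.

```python
import torch
from datasets import load_dataset

def wikitext2_perplexity(model, tokenizer, seq_len=2048, device="cuda"):
    """Approximate perplexity over non-overlapping windows of the WikiText-2 test split."""
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    input_ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids.to(device)

    nlls = []
    for start in range(0, input_ids.size(1) - seq_len, seq_len):
        window = input_ids[:, start : start + seq_len]
        with torch.no_grad():
            loss = model(window, labels=window).loss  # mean next-token cross-entropy
        nlls.append(loss * seq_len)
    return torch.exp(torch.stack(nlls).sum() / (len(nlls) * seq_len))
```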
casque/galaxy_gods
casque
2023-07-10T03:30:30Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T03:30:18Z
--- license: creativeml-openrail-m ---
casque/Colored_Icons_by_vizsumit
casque
2023-07-10T03:24:54Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T03:23:35Z
--- license: creativeml-openrail-m ---
casque/PastelVectorAi
casque
2023-07-10T03:18:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T03:17:48Z
--- license: creativeml-openrail-m ---
casque/penink
casque
2023-07-10T03:10:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T03:08:34Z
--- license: creativeml-openrail-m ---
edgamer/ppo-LunarLander-v2
edgamer
2023-07-10T02:51:29Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T01:11:26Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 281.47 +/- 22.46 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
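Pending the author's own snippet, loading this policy usually mirrors other `huggingface_sb3` uploads; the filename below is an assumption, so list the repository files to confirm it.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="edgamer/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```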
nomsgadded/textual_inversion_van_gogh
nomsgadded
2023-07-10T02:47:57Z
48
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-10T01:24:49Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - nomsgadded/textual_inversion_van_gogh These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
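To try the learned embedding yourself, a minimal diffusers sketch is shown below; `load_textual_inversion` reads the embedding from this repository, and the `<van-gogh>` token used in the prompt is an assumption, so check the repository files for the actual placeholder token.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Pulls the learned embedding (and its placeholder token) from this repository.
pipe.load_textual_inversion("nomsgadded/textual_inversion_van_gogh")

# Assumption: the placeholder token is "<van-gogh>"; replace with the token stored in the repo.
image = pipe("a quiet city street at night in the style of <van-gogh>").images[0]
image.save("textual_inversion_sample.png")
```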
Raj-Sanjay-Shah/babyLM_roberta_base_epoch_20
Raj-Sanjay-Shah
2023-07-10T02:44:58Z
133
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-10T02:24:40Z
--- license: cc-by-nc-nd-4.0 ---
harrycools/q-FrozenLake-v1-4x4-noSlippery
harrycools
2023-07-10T02:40:33Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T02:40:31Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="harrycools/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
eatrero/distilbert-base-uncased-finetuned-emotion
eatrero
2023-07-10T02:22:31Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-09T19:15:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9235 - name: F1 type: f1 value: 0.9234507249341903 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2222 - Accuracy: 0.9235 - F1: 0.9235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3302 | 0.9005 | 0.8959 | | No log | 2.0 | 500 | 0.2222 | 0.9235 | 0.9235 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
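A short, hedged inference sketch with the `transformers` pipeline; the six emotion labels come from the `emotion` dataset the model was fine-tuned on.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="eatrero/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every emotion label instead of only the top one
)
print(classifier("I'm thrilled the fine-tuning finally converged!"))
```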
Chickenfish/Monica_stable_v1
Chickenfish
2023-07-10T02:19:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T02:17:36Z
--- license: creativeml-openrail-m ---
JBJoyce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
JBJoyce
2023-07-10T01:51:39Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-10T00:20:46Z
--- license: bsd-3-clause tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.4626 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6832 | 1.0 | 113 | 0.6136 | 0.79 | | 0.3528 | 2.0 | 226 | 0.6350 | 0.77 | | 0.178 | 3.0 | 339 | 0.7414 | 0.8 | | 0.142 | 4.0 | 452 | 0.5234 | 0.84 | | 0.1209 | 5.0 | 565 | 0.5176 | 0.88 | | 0.0004 | 6.0 | 678 | 0.4160 | 0.88 | | 0.0002 | 7.0 | 791 | 0.4798 | 0.9 | | 0.0002 | 8.0 | 904 | 0.4693 | 0.89 | | 0.1201 | 9.0 | 1017 | 0.4636 | 0.9 | | 0.0002 | 10.0 | 1130 | 0.4626 | 0.9 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
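For quick experimentation, a minimal, hedged sketch with the audio-classification pipeline is given below; the audio path is a placeholder, and the pipeline decodes and resamples the input to the feature extractor's expected rate.

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="JBJoyce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)

# Placeholder path; any local audio file (or a raw waveform array) works here.
print(classifier("some_song.wav", top_k=5))
```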
EleutherAI/pythia-70m-deduped-v0
EleutherAI
2023-07-10T01:32:46Z
933
8
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-01T00:24:53Z
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-70M-deduped ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:contact@eleuther.ai). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. 
To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-70M-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-70M-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-70M-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-70M-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data Pythia-70M-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. 
It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge – Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. 
<figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
EleutherAI/pythia-2.8b-deduped-v0
EleutherAI
2023-07-10T01:32:13Z
880
6
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-23T17:41:01Z
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-2.8B-deduped ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:contact@eleuther.ai). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. 
To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-2.8B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-2.8B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data Pythia-2.8B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. 
It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge – Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. 
<figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
boostcamp-5th-nlp07/qlora-koalpaca-polyglot-12.8b-fast
boostcamp-5th-nlp07
2023-07-10T01:32:09Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-10T01:32:03Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
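A hedged sketch of how this adapter is typically used at inference time: the base model id is read from the adapter's own config rather than assumed, and the `BitsAndBytesConfig` mirrors the quantization settings listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

adapter_id = "boostcamp-5th-nlp07/qlora-koalpaca-polyglot-12.8b-fast"
peft_config = PeftConfig.from_pretrained(adapter_id)  # records the base model it was trained on

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```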
EleutherAI/pythia-1b-deduped-v0
EleutherAI
2023-07-10T01:32:03Z
853
10
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-18T03:08:13Z
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-1B-deduped ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:contact@eleuther.ai). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. 
To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-1B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-1B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-1B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data Pythia-1B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. 
It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge – Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. 
<figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
EleutherAI/pythia-410m-deduped-v0
EleutherAI
2023-07-10T01:31:39Z
860
6
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-01T00:48:44Z
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-410M-deduped ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:contact@eleuther.ai). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. 
To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-410M-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment.

#### Out-of-scope use

The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-410M-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or commercial chatbots. This means Pythia-410M-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions.

#### Limitations and biases

The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regard to gender, religion, and race. Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-410M-deduped.

### Quickstart

Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```

Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia).

### Training

#### Training data

Pythia-410M-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in English. 
It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge – Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. 
<figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
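The Quickstart above uses `pythia-70m-deduped`; as a hedged sketch for this card's own model, an intermediate checkpoint of Pythia-410M-deduped can be loaded the same way. The repository id and the `stepN` branch name below follow the conventions described in the card (143 saved steps on branches named `step1000` … `step143000`); adjust them if you are working from the `-v0` repository shown in this record, and verify that the chosen branch exists.

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load Pythia-410M-deduped at an intermediate training checkpoint.
# "step100000" is one of the stepN branches described above; substitute any saved step.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-410m-deduped",
    revision="step100000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-410m-deduped",
    revision="step100000",
)
```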
havingAfish/CDA
havingAfish
2023-07-10T01:00:15Z
0
0
null
[ "dataset:movie_rationales", "region:us" ]
null
2023-07-10T00:59:26Z
--- datasets: - movie_rationales ---
codenlighten/lora_pile_70b
codenlighten
2023-07-10T00:58:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-09T23:13:23Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
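The card only records the PEFT version. As a minimal sketch of how such a LoRA adapter is typically attached (assuming the adapter config stores its base model name and that the base is a causal language model — neither is documented here):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Read the adapter config to find the base model it was trained against.
config = PeftConfig.from_pretrained("codenlighten/lora_pile_70b")

# Load the base model and attach the LoRA adapter weights on top of it.
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, "codenlighten/lora_pile_70b")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```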
gawoon/boston-demo
gawoon
2023-07-10T00:47:40Z
0
0
keras
[ "keras", "tensorboard", "tf-keras", "region:us" ]
null
2023-07-09T23:34:38Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | RMSprop | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | 100 | | jit_compile | False | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | rho | 0.9 | | momentum | 0.0 | | epsilon | 1e-07 | | centered | False | | training_precision | float32 |
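Only the optimizer settings are documented above; the model architecture is not. As a hedged sketch, the listed hyperparameters can be reconstructed as a Keras RMSprop optimizer (the `model.compile` line is a placeholder, since the loss and architecture are unknown):

```python
import tensorflow as tf

# Rebuild the RMSprop optimizer from the hyperparameter table above.
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=0.0010000000474974513,
    rho=0.9,
    momentum=0.0,
    epsilon=1e-07,
    centered=False,
)

# Placeholder: the actual model and loss are not documented in this card.
# model.compile(optimizer=optimizer, loss="mse")
```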
Hedayat-Abrishami/Reinforce-2
Hedayat-Abrishami
2023-07-10T00:32:49Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T00:32:36Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 69.30 +/- 55.81 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
crisU8/bert-finetuned-ner-clinical-trials-1
crisU8
2023-07-10T00:25:34Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T00:09:08Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-clinical-trials-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-clinical-trials-1 This model is a fine-tuned version of [lcampillos/roberta-es-clinical-trials-ner](https://huggingface.co/lcampillos/roberta-es-clinical-trials-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2968 - Precision: 0.7244 - Recall: 0.7673 - F1: 0.7452 - Accuracy: 0.9151 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.4642 | 1.0 | 502 | 0.3147 | 0.6348 | 0.7316 | 0.6798 | 0.8977 | | 0.248 | 2.0 | 1004 | 0.2774 | 0.7073 | 0.7667 | 0.7358 | 0.9142 | | 0.1922 | 3.0 | 1506 | 0.2844 | 0.7127 | 0.7678 | 0.7392 | 0.9132 | | 0.1588 | 4.0 | 2008 | 0.2968 | 0.7244 | 0.7673 | 0.7452 | 0.9151 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
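As the card omits a usage example, here is a minimal sketch with the standard 🤗 Transformers pipeline. The base model targets Spanish clinical-trial text, so the example sentence is an illustrative assumption rather than a documented output:

```python
from transformers import pipeline

# Run the fine-tuned NER model on a Spanish clinical-trial-style sentence.
ner = pipeline(
    "token-classification",
    model="crisU8/bert-finetuned-ner-clinical-trials-1",
    aggregation_strategy="simple",
)
print(ner("Los pacientes recibieron 50 mg de atorvastatina durante 12 semanas."))
```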
bochen0909/whisper-base-finetuned-gtzan
bochen0909
2023-07-10T00:17:07Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-09T23:48:39Z
--- license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-base-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.9 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-finetuned-gtzan This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5279 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3629 | 1.0 | 75 | 1.2791 | 0.6 | | 0.6712 | 2.0 | 150 | 0.7613 | 0.75 | | 0.5613 | 3.0 | 225 | 0.6708 | 0.77 | | 0.2594 | 4.0 | 300 | 0.4979 | 0.86 | | 0.0944 | 5.0 | 375 | 0.5922 | 0.85 | | 0.1038 | 6.0 | 450 | 0.4702 | 0.89 | | 0.0077 | 7.0 | 525 | 0.7109 | 0.85 | | 0.0036 | 8.0 | 600 | 0.5821 | 0.87 | | 0.0049 | 9.0 | 675 | 0.5013 | 0.9 | | 0.0025 | 10.0 | 750 | 0.5279 | 0.9 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 1.13.1 - Datasets 2.13.1 - Tokenizers 0.13.3
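As the card omits a usage example, here is a minimal sketch with the 🤗 Transformers audio-classification pipeline; `track.wav` is a placeholder path to a local music clip (roughly 30 s, as in GTZAN):

```python
from transformers import pipeline

# Classify the genre of a local audio clip with the fine-tuned checkpoint.
classifier = pipeline(
    "audio-classification",
    model="bochen0909/whisper-base-finetuned-gtzan",
)
print(classifier("track.wav"))  # placeholder path to a local audio file
```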
LongshenOu/lyric-trans-en2zh
LongshenOu
2023-07-10T00:16:10Z
31
0
transformers
[ "transformers", "pytorch", "mbart", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2023-07-10T00:07:39Z
--- license: cc-by-nc-sa-4.0 ---
NasimB/gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k
NasimB
2023-07-10T00:11:09Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-09T22:15:33Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7049 | 0.3 | 500 | 5.6332 | | 5.3603 | 0.59 | 1000 | 5.2033 | | 5.0063 | 0.89 | 1500 | 4.9509 | | 4.7286 | 1.18 | 2000 | 4.7987 | | 4.5752 | 1.48 | 2500 | 4.6728 | | 4.4634 | 1.78 | 3000 | 4.5663 | | 4.3226 | 2.07 | 3500 | 4.4933 | | 4.1472 | 2.37 | 4000 | 4.4458 | | 4.1157 | 2.67 | 4500 | 4.3824 | | 4.0756 | 2.96 | 5000 | 4.3282 | | 3.8402 | 3.26 | 5500 | 4.3258 | | 3.8183 | 3.55 | 6000 | 4.2905 | | 3.7968 | 3.85 | 6500 | 4.2597 | | 3.6538 | 4.15 | 7000 | 4.2640 | | 3.5239 | 4.44 | 7500 | 4.2506 | | 3.5235 | 4.74 | 8000 | 4.2375 | | 3.4943 | 5.04 | 8500 | 4.2350 | | 3.3327 | 5.33 | 9000 | 4.2405 | | 3.3319 | 5.63 | 9500 | 4.2383 | | 3.3325 | 5.92 | 10000 | 4.2378 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
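As the card omits a usage example, here is a minimal text-generation sketch with the 🤗 Transformers pipeline; the prompt and generation length are illustrative assumptions:

```python
from transformers import pipeline

# Sample a continuation from the fine-tuned GPT-2 checkpoint.
generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```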
alexshengzhili/Llava-Graph-ocr-ft-on-instruct150k
alexshengzhili
2023-07-10T00:08:22Z
16
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-28T18:04:32Z
--- license: mit --- This model was obtained in two stages: 1. Feature alignment based on SciCap; the intermediate output is at this link: https://huggingface.co/alexshengzhili/llava-7bv0-mm-projector-ft-with-ocr-caption-prompted-paragraph 2. Instruction tuning based on the recipe of the original LLaVA paper.
tbooy/ppo-Huggy
tbooy
2023-07-09T22:51:45Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-09T22:51:40Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: tbooy/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
skywalker7/Taxi-v3
skywalker7
2023-07-09T22:45:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-09T22:45:54Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="skywalker7/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
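The usage snippet above relies on a `load_from_hub` helper and a `gym` import that are not defined in the card. A fuller hedged sketch is below; the helper definition and the layout of the pickled dictionary (keys such as `"env_id"`) are assumptions based on the Deep RL Course convention:

```python
import pickle
import gymnasium as gym  # or `import gym`, depending on your setup
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download the pickled Q-learning dictionary from the Hub (course-style helper)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="skywalker7/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```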
NasimB/gpt2-concat-mod-datatsets-rarity-all-iorder-no-cut-repetition
NasimB
2023-07-09T22:29:37Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-09T20:40:54Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-mod-datatsets-rarity-all-iorder-no-cut-repetition results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-mod-datatsets-rarity-all-iorder-no-cut-repetition This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.2189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.734 | 0.3 | 500 | 5.7011 | | 5.3923 | 0.6 | 1000 | 5.2709 | | 5.0404 | 0.9 | 1500 | 5.0055 | | 4.7561 | 1.21 | 2000 | 4.8552 | | 4.6189 | 1.51 | 2500 | 4.7257 | | 4.5064 | 1.81 | 3000 | 4.6217 | | 4.3497 | 2.11 | 3500 | 4.5593 | | 4.1924 | 2.41 | 4000 | 4.5086 | | 4.1669 | 2.71 | 4500 | 4.4446 | | 4.102 | 3.01 | 5000 | 4.4099 | | 3.8642 | 3.32 | 5500 | 4.4021 | | 3.8619 | 3.62 | 6000 | 4.3641 | | 3.8392 | 3.92 | 6500 | 4.3356 | | 3.6347 | 4.22 | 7000 | 4.3605 | | 3.5759 | 4.52 | 7500 | 4.3424 | | 3.5613 | 4.82 | 8000 | 4.3281 | | 3.4782 | 5.12 | 8500 | 4.3392 | | 3.3739 | 5.42 | 9000 | 4.3409 | | 3.3737 | 5.73 | 9500 | 4.3415 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
vinesmsuic/magicbrush-jul7
vinesmsuic
2023-07-09T22:04:54Z
1,298
9
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-08T02:50:03Z
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- diffuser port of https://huggingface.co/osunlp/InstructPix2Pix-MagicBrush. diffuser version of `MagicBrush-epoch-52-step-4999.ckpt` ```python from PIL import Image, ImageOps import requests import torch from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler from PIL import Image url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" def download_image(url): image = Image.open(requests.get(url, stream=True).raw) image = ImageOps.exif_transpose(image) image = image.convert("RGB") return image image = download_image(url) prompt = "make the mountains snowy" class MagicBrush(): def __init__(self, weight="vinesmsuic/magicbrush-jul7"): self.pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( weight, torch_dtype=torch.float16 ).to("cuda") self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config) def infer_one_image(self, src_image, instruct_prompt, seed): generator = torch.manual_seed(seed) image = self.pipe(instruct_prompt, image=src_image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7, generator=generator).images[0] return image model = MagicBrush() image_output = model.infer_one_image(image, prompt, 42) image_output ``` ![](https://i.imgur.com/rL3zEkh.png) ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
VK246/IC_ver3b_coco_swin_gpt2_2
VK246
2023-07-09T21:56:25Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:coco", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-09T18:30:26Z
--- tags: - generated_from_trainer datasets: - coco metrics: - rouge - bleu model-index: - name: IC_ver3b_coco_swin_gpt2_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IC_ver3b_coco_swin_gpt2_2 This model is a fine-tuned version of [](https://huggingface.co/) on the coco dataset. It achieves the following results on the evaluation set: - Loss: 0.8483 - Rouge1: 41.3447 - Rouge2: 15.7294 - Rougel: 37.6633 - Rougelsum: 37.6744 - Bleu: 9.4309 - Gen Len: 11.3368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:| | 1.2141 | 0.25 | 300 | 1.0093 | 35.2179 | 11.1228 | 32.1546 | 32.167 | 6.2018 | 11.3368 | | 1.0037 | 0.51 | 600 | 0.9600 | 36.4586 | 11.8379 | 33.324 | 33.3342 | 7.0081 | 11.3368 | | 0.9644 | 0.76 | 900 | 0.9303 | 38.5343 | 13.2266 | 35.2902 | 35.3055 | 7.539 | 11.3368 | | 0.9367 | 1.02 | 1200 | 0.9004 | 39.2182 | 13.7589 | 35.7747 | 35.7799 | 7.6492 | 11.3368 | | 0.8842 | 1.27 | 1500 | 0.8876 | 39.4537 | 14.1037 | 35.9758 | 35.9776 | 8.4067 | 11.3368 | | 0.86 | 1.53 | 1800 | 0.8758 | 40.4179 | 15.0774 | 37.0166 | 37.0401 | 8.8897 | 11.3368 | | 0.8465 | 1.78 | 2100 | 0.8665 | 40.4073 | 15.1125 | 36.9767 | 36.9877 | 8.9602 | 11.3368 | | 0.8421 | 2.04 | 2400 | 0.8592 | 40.62 | 15.2042 | 36.9224 | 36.9359 | 9.1313 | 11.3368 | | 0.8106 | 2.29 | 2700 | 0.8548 | 41.0356 | 15.399 | 37.4562 | 37.4635 | 9.2534 | 11.3368 | | 0.7963 | 2.54 | 3000 | 0.8521 | 41.1998 | 15.6442 | 37.6659 | 37.6682 | 9.4605 | 11.3368 | | 0.795 | 2.8 | 3300 | 0.8493 | 41.1215 | 15.581 | 37.4725 | 37.4978 | 9.5488 | 11.3368 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
FFusion/di.FFUSION.ai-v2.1-768-BaSE-alpha
FFusion
2023-07-09T21:44:08Z
25
3
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "di.ffusion.ai", "art", "base model", "en", "doi:10.57967/hf/0855", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-06T06:47:14Z
--- license: creativeml-openrail-m language: - en pipeline_tag: text-to-image tags: - stable-diffusion - text-to-image - di.ffusion.ai - art - base model library_name: diffusers widget: - text: >- a sprinkled donut sitting on top of a table, blender donut tutorial, colorful hyperrealism, everything is made of candy, hyperrealistic digital painting, covered in sprinkles and crumbs, vibrant colors hyper realism, colorful smoke explosion background example_title: Donut Fusion - text: >- a cup of coffee with a tree in it, surreal art, awesome great composition, surrealism!!!!, cafe in the clouds, perfectly realistic yet surreal, surreal realistic, floating trees, amazing composition, dream scenery art, whimsical surrealism, surreal composition, trending artistic art, surrealism art, surreal scene, surrealistic painting, surreal style, surreal illustration, dreamlike surrealism colorful smoke and fire coming out of it,explosion of data fragments,exploding background,realistic explosion,3d digital art 4k,fire and explosion,explosion,background explosion,cinema 4 d art,shattering,beeple. hyperrealism,explosion background,rendered in cinema 4 d,rendered in cinema4d,explosive background, example_title: Coffee Fusion - text: >- brightly colored headphones with a splash of paint and music notes, vibing to music, artistic illustration, stunning artwork, music is life, beautiful digital artwork, headphones on, listening to music, music poster, synesthesia, music in the air, listening to godly music, style hybrid mix of beeple, headphones, digital artwork 4 k, side profile artwork, no humans, planet, space, black background, cable, simple background, concept art, cinematic, dramatic, intricate details, dark lighting example_title: Headset Fusion - text: >- a group of three blocks with a picture of a boat in the middle of them, surreal 3 d render, 3 d epic illustrations, 3 d artistic render, inspired by Matthias Jung, environmental key art, erik johansson style, surreal concept art, alexander jansson style, cube portals, beeple masterpiece, 3 d render beeple, surrealistic digital artwork example_title: Digital Fusion --- ![FMMRrb81eZSVvvU6QyCCXO.jpg](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/sEe_VVbvClLEbGnznYZH-.jpeg) 📣 **FFUSION AI - 768 BaSE** Public alpha Release is Here! download the **`di.FFUSION.ai-v2.1-768-BaSE-alpha-preview.safetensors`** [here](https://huggingface.co/FFusion/di.FFUSION.ai-v2.1-768-BaSE-alpha/blob/main/di.FFUSION.ai-v2.1-768-BaSE-alpha-preview.safetensors). # 🚀 Model Overview: Unleashing the Power of Imagination! 🌠 Introducing FFUSION AI - a groundbreaking tool for image generation and transformation, crafted around the cutting-edge Latent Diffusion Model. Dive into the surreal world of FFUSION Ai, powered by Stable Diffusion 2.1, and let your favorite prompts transform into captivating works of art. Effortlessly weave your ideas with mesmerizing effects, immersing your audience in a world where imagination knows no bounds. **Developed by:** Idle Stoev, Source Code Bulgaria, Praesidium CX & BlackSwan Technologies **Model type:** Diffusion-based text-to-image generation model **Language(s):** English **License:** CreativeML Open RAIL++-M License # 🔬 Intended Use: From Research to Artistry 🎨 ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/6mxV9Vwqa0LIHsDBeQoSu.png) FFUSION AI is a multi-faceted tool that shines in various applications. 
Primarily envisioned for research, FFUSION AI has the potential to: Examine the limitations and inherent biases in generative models. Unleash the artist within, aiding in creative processes or artistic endeavours. Reinvent educational or creative utilities with AI-driven innovations. Propel the research in the fascinating domain of generative models. However, it's crucial to note that certain uses of FFUSION AI are strictly prohibited, as outlined below. # 🚫 Forbidden Use: Setting Boundaries for Safe AI 🛑 We've borrowed the principles from the Stable Diffusion v2.1 model card, which apply equally to Fusion AI alpha, beta, and final releases. We strictly prohibit the use of this model for generating or spreading images intended to incite hostility or alienation. This includes content that is foreseeably disturbing, distressing, offensive, or stereotype-propagating. ## Out-of-Scope Use Since this model isn't designed to create factual representations of people or events, such usage is deemed out-of-scope. ## Misuse and Malicious Use Utilizing the model to create content that inflicts harm upon individuals is deemed misuse. This includes: - Generating content that belittles, dehumanizes, or otherwise harms individuals or their environments, cultures, religions, etc. - Deliberately promoting or disseminating discriminatory content or harmful stereotypes. - Impersonating individuals without consent. - Generating sexual content without viewer consent. - Spreading mis- and disinformation. - Illustrating extreme violence and gore. - Distributing copyrighted or licensed material against its usage terms. - Modifying copyrighted or licensed material against its usage terms. Our policy, adopted from the principles of the Stable Diffusion v2.1 model card, ensures the responsible use of Fusion AI beta and final releases. **We expressly prohibit the utilization of our model for generating or distributing images that might incite hostility or exclusion.** This includes: - Content that is distressing, offensive, or perpetuates harmful stereotypes. - Misuse or malicious use that harms individuals or communities, including creating demeaning or harmful representations, or promoting discriminatory content. - Using the model for impersonation without consent or creating non-consensual explicit content. - Generating or spreading mis- and disinformation, violent, gory imagery, or violating copyright terms. # 🔭 Model Limitations and Bias: Acknowledging Imperfections 🌐 While our model leaps toward the future of AI-driven creativity, it's essential to recognize its current limitations: - The quest for perfect photorealism and creative surrealism continues. - Rendering legible text remains a challenge. - Even more complex tasks, such as depicting "A red cube on top of a blue sphere in the middle of the ocean in a desert", may pose difficulty (but are still processable). - Human figures, particularly faces, may not be accurately generated. # Version Releases We are excited to unveil the following versions: ## Version 512 Beta – LiTE, MiD, BFG model variations: - FFUSION.ai-512-beta-BFG-build.0401.safetensors - FFUSION.ai-512-beta-LiTE-build.0201.safetensors - FFUSION.ai-512-beta-MiD-build.0401.safetensors ### Version 768 Alpha - BaSE, FUSION, FFUSION: BaSE and FUSION models will soon come with enhanced training capabilities including LoRa, LyCORIS, Dylora & Kohya-ss/sd-scripts. More information will be revealed upon release. 
- **di.FFUSION.ai-v2.1-768-BaSE-alpha-preview.safetensors** # FUSION AI Text Encoders: - **di.FFUSION.ai-tXe-FXAA:** Trained on "121361" images. Enhance your model's quality and sharpness using the pre-trained Unet. - **di.FFUSION.ai-tXe-fX:** Trained on "211924" images. Amplify your model's surrealism and effects. # Environmental Impact Our dedication to sustainable development is reflected in the model's carbon footprint. The CO2 emissions, calculated using the Machine Learning Impact calculator, stand at 124.95 kg for a total of 1190 hours of usage with an A100 PCIe 40GB GPU. **Hardware Type:** A100 PCIe 40GB **Hours used:** 1190 **Cloud Provider:** CoreWeave & Runpod (official partner) **Compute Region**: US Cyxtera Chicago Data Center - ORD1 / EU - CZ & EU - RO - Carbon Emitted (Power consumption x Time x Carbon produced based on the location of the power grid): 124.95 kg of CO2 emitted. - Power consumption x Time x Carbon Produced Based on the Local Power Grid: 250W x 1190h = 297.5 kWh x 0.42 kg eq. CO2/kWh = 124.95 kg eq. CO2 - Local Hardware Storage 4x16TB Raid5 WD Gold Optimizer: AdamW & Dadaptation **This model card was written by: Idle Stoev and is based on the Stability AI - Stable Diffusion 2.1 model card.** Models: [![FFusion-BaSE](https://img.shields.io/badge/2.1%20🤗%20Model-FFusion--BaSE-blue)](https://huggingface.co/FFusion/FFusion-BaSE) [![di.FFUSION.ai-v2.1-768-BaSE-alpha](https://img.shields.io/badge/🤗%20Model-di.FFUSION.ai--v2.1--768--BaSE--alpha-blue)](https://huggingface.co/FFusion/di.FFUSION.ai-v2.1-768-BaSE-alpha) [![di.ffusion.ai.Beta512](https://img.shields.io/badge/2.1%20🤗%20Model-di.ffusion.ai.Beta512-blue)](https://huggingface.co/FFusion/di.ffusion.ai.Beta512) [![FFUSION.ai-Text-Encoder-LyCORIS-SD-2.1](https://img.shields.io/badge/2.1%20🤗%20Model-FFUSION.ai--Text--Encoder--LyCORIS--SD--2.1-blue)](https://huggingface.co/FFusion/FFUSION.ai-Text-Encoder-LyCORIS-SD-2.1) Contact: [![Email](https://img.shields.io/badge/Email-di%40ffusion.ai-blue)](mailto:di@ffusion.ai)
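The card describes the release but does not show a loading snippet. As a hedged sketch with 🤗 Diffusers (the record is tagged for `StableDiffusionPipeline`): fp16 and CUDA are assumptions, the prompt is taken from the card's own widget examples, and the alpha preview may alternatively be used as a single `.safetensors` checkpoint rather than a diffusers-format repository.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the 768 BaSE alpha weights with diffusers (sketch; verify the repo layout first).
pipe = StableDiffusionPipeline.from_pretrained(
    "FFusion/di.FFUSION.ai-v2.1-768-BaSE-alpha",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a sprinkled donut sitting on top of a table, colorful hyperrealism"
).images[0]
image.save("donut_fusion.png")
```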
aphi/Reinforce-Pixelcopter-PLE-v0
aphi
2023-07-09T21:41:42Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-09T21:41:36Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 50.60 +/- 41.93 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
pavankantharaju/ppo-LunarLander-v2
pavankantharaju
2023-07-09T21:39:58Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-09T21:39:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 269.66 +/- 17.48 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
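The usage section above is a TODO stub. A hedged completion is sketched below; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention and should be checked against the repository's files:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed, not documented in the card).
checkpoint = load_from_hub(
    repo_id="pavankantharaju/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out a single action in the environment.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```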
0sunfire0/Moon_landing_01
0sunfire0
2023-07-09T21:31:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-09T21:30:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 291.64 +/- 18.53 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
FFusion/FFUSION.ai-Text-Encoder-LyCORIS-SD-2.1
FFusion
2023-07-09T21:22:09Z
0
1
null
[ "di.ffusion.ai", "stable-diffusion", "LyCORIS", "LoRA", "en", "arxiv:1910.09700", "arxiv:2108.06098", "license:creativeml-openrail-m", "region:us" ]
null
2023-06-06T18:21:28Z
--- license: creativeml-openrail-m language: - en tags: - di.ffusion.ai - stable-diffusion - LyCORIS - LoRA --- # Model Card for di.FFUSION.ai Text Encoder - SD 2.1 LyCORIS ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/zcw9AUCSbanb61xe6pIUc.png) <!-- Provide a quick summary of what the model is/does. [Optional] --> di.FFUSION.ai-tXe-FXAA Trained on "121361" images. - **DOWNLOAD:** https://huggingface.co/FFusion/FFUSION.ai-Text-Encoder-LyCORIS-SD-2.1/blob/main/di.FFUSION.ai-tXe-FXAA.safetensors Enhance your model's quality and sharpness using your own pre-trained Unet. The text encoder (without UNET) is wrapped in LyCORIS. Optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99)) Network dimension/rank: 768.0 Alpha: 768.0 Module: lycoris.kohya {'conv_dim': '256', 'conv_alpha': '256', 'algo': 'loha'} Large size due to Lyco CONV 256 ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/Ig1IOYZAyUrhpWIhdC6U-.png) ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/66eAHPc501sbQx35-B0Oo.png) This is a heavy experimental version we used to test even with sloppy captions (quick WD tags and terrible clip), yet the results were satisfying. Note: This is not the text encoder used in the official FFUSION AI model. # SAMPLES **Available also at https://civitai.com/models/83622** ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/agjJ--YR_k_Pbn8tOMsqr.png) For a1111 Install https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris Download di.FFUSION.ai-tXe-FXAA to /models/Lycoris Option 1: Insert <lyco:di.FFUSION.ai-tXe-FXAA:1.0> to prompt No need to split Unet and Text Enc as it's only a text encoder there. 
You can go up to 2x weights Option2: If you need it always ON (ex run a batch from txt file) then you can go to settings / Quicksettings list ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/N6M4-9eIkvi3nn3koh1fA.png) add sd_lyco restart and you should have a drop-down now 🤟 🥃 ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/e8ROXaN8jIaT9lu7tNRjD.png) # Table of Contents - [Model Card for di.FFUSION.ai Text Encoder - SD 2.1 LyCORIS](#model-card-for--model_id-) - [Table of Contents](#table-of-contents) - [Table of Contents](#table-of-contents-1) - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use [Optional]](#downstream-use-optional) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Preprocessing](#preprocessing) - [Speeds, Sizes, Times](#speeds-sizes-times) - [Evaluation](#evaluation) - [Testing Data, Factors & Metrics](#testing-data-factors--metrics) - [Testing Data](#testing-data) - [Factors](#factors) - [Metrics](#metrics) - [Results](#results) - [Model Examination](#model-examination) - [Environmental Impact](#environmental-impact) - [Technical Specifications [optional]](#technical-specifications-optional) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [Citation](#citation) - [Glossary [optional]](#glossary-optional) - [More Information [optional]](#more-information-optional) - [Model Card Authors [optional]](#model-card-authors-optional) - [Model Card Contact](#model-card-contact) - [How to Get Started with the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description <!-- Provide a longer summary of what this model is/does. --> di.FFUSION.ai-tXe-FXAA Trained on &#34;121361&#34; images. Enhance your model&#39;s quality and sharpness using your own pre-trained Unet. The text encoder (without UNET) is wrapped in LyCORIS. Optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99)) Network dimension/rank: 768.0 Alpha: 768.0 Module: lycoris.kohya {&#39;conv_dim&#39;: &#39;256&#39;, &#39;conv_alpha&#39;: &#39;256&#39;, &#39;algo&#39;: &#39;loha&#39;} Large size due to Lyco CONV 256 This is a heavy experimental version we used to test even with sloppy captions (quick WD tags and terrible clip), yet the results were satisfying. Note: This is not the text encoder used in the official FFUSION AI model. - **Developed by:** FFusion.ai - **Shared by [Optional]:** idle stoev - **Model type:** Language model - **Language(s) (NLP):** en - **License:** creativeml-openrail-m - **Parent Model:** More information needed - **Resources for more information:** More information needed # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> The text encoder (without UNET) is wrapped in LyCORIS. 
Optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99)) Network dimension/rank: 768.0 Alpha: 768.0 Module: lycoris.kohya {&#39;conv_dim&#39;: &#39;256&#39;, &#39;conv_alpha&#39;: &#39;256&#39;, &#39;algo&#39;: &#39;loha&#39;} Large size due to Lyco CONV 256 # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Trained on &#34;121361&#34; images. ss_caption_tag_dropout_rate: &#34;0.0&#34;, ss_multires_noise_discount: &#34;0.3&#34;, ss_mixed_precision: &#34;bf16&#34;, ss_text_encoder_lr: &#34;1e-07&#34;, ss_keep_tokens: &#34;3&#34;, ss_network_args: &#34;{&#34;conv_dim&#34;: &#34;256&#34;, &#34;conv_alpha&#34;: &#34;256&#34;, &#34;algo&#34;: &#34;loha&#34;}&#34;, ss_caption_dropout_rate: &#34;0.02&#34;, ss_flip_aug: &#34;False&#34;, ss_learning_rate: &#34;2e-07&#34;, ss_sd_model_name: &#34;stabilityai/stable-diffusion-2-1-base&#34;, ss_max_grad_norm: &#34;1.0&#34;, ss_num_epochs: &#34;2&#34;, ss_gradient_checkpointing: &#34;False&#34;, ss_face_crop_aug_range: &#34;None&#34;, ss_epoch: &#34;2&#34;, ss_num_train_images: &#34;121361&#34;, ss_color_aug: &#34;False&#34;, ss_gradient_accumulation_steps: &#34;1&#34;, ss_total_batch_size: &#34;100&#34;, ss_prior_loss_weight: &#34;1.0&#34;, ss_training_comment: &#34;None&#34;, ss_network_dim: &#34;768&#34;, ss_output_name: &#34;FusionaMEGA1tX&#34;, ss_max_bucket_reso: &#34;1024&#34;, ss_network_alpha: &#34;768.0&#34;, ss_steps: &#34;2444&#34;, ss_shuffle_caption: &#34;True&#34;, ss_training_finished_at: &#34;1684158038.0763328&#34;, ss_min_bucket_reso: &#34;256&#34;, ss_noise_offset: &#34;0.09&#34;, ss_enable_bucket: &#34;True&#34;, ss_batch_size_per_device: &#34;20&#34;, ss_max_train_steps: &#34;2444&#34;, ss_network_module: &#34;lycoris.kohya&#34;, ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
-->

### Preprocessing

{"buckets": {"0": {"resolution": [192, 256], "count": 1}, "1": {"resolution": [192, 320], "count": 1}, "2": {"resolution": [256, 384], "count": 1}, "3": {"resolution": [256, 512], "count": 1}, "4": {"resolution": [384, 576], "count": 2}, "5": {"resolution": [384, 640], "count": 2}, "6": {"resolution": [384, 704], "count": 1}, "7": {"resolution": [384, 1088], "count": 15}, "8": {"resolution": [448, 448], "count": 5}, "9": {"resolution": [448, 576], "count": 1}, "10": {"resolution": [448, 640], "count": 1}, "11": {"resolution": [448, 768], "count": 1}, "12": {"resolution": [448, 832], "count": 1}, "13": {"resolution": [448, 1088], "count": 25}, "14": {"resolution": [448, 1216], "count": 1}, "15": {"resolution": [512, 640], "count": 2}, "16": {"resolution": [512, 768], "count": 10}, "17": {"resolution": [512, 832], "count": 3}, "18": {"resolution": [512, 896], "count": 1525}, "19": {"resolution": [512, 960], "count": 2}, "20": {"resolution": [512, 1024], "count": 665}, "21": {"resolution": [512, 1088], "count": 8}, "22": {"resolution": [576, 576], "count": 5}, "23": {"resolution": [576, 768], "count": 1}, "24": {"resolution": [576, 832], "count": 667}, "25": {"resolution": [576, 896], "count": 9601}, "26": {"resolution": [576, 960], "count": 872}, "27": {"resolution": [576, 1024], "count": 17}, "28": {"resolution": [640, 640], "count": 3}, "29": {"resolution": [640, 768], "count": 7}, "30": {"resolution": [640, 832], "count": 608}, "31": {"resolution": [640, 896], "count": 90}, "32": {"resolution": [704, 640], "count": 1}, "33": {"resolution": [704, 704], "count": 11}, "34": {"resolution": [704, 768], "count": 1}, "35": {"resolution": [704, 832], "count": 1}, "36": {"resolution": [768, 640], "count": 225}, "37": {"resolution": [768, 704], "count": 6}, "38": {"resolution": [768, 768], "count": 74442}, "39": {"resolution": [832, 576], "count": 23784}, "40": {"resolution": [832, 640], "count": 554}, "41": {"resolution": [896, 512], "count": 1235}, "42": {"resolution": [896, 576], "count": 50}, "43": {"resolution": [896, 640], "count": 88}, "44": {"resolution": [960, 512], "count": 165}, "45": {"resolution": [960, 576], "count": 5246}, "46": {"resolution": [1024, 448], "count": 5}, "47": {"resolution": [1024, 512], "count": 1187}, "48": {"resolution": [1024, 576], "count": 40}, "49": {"resolution": [1088, 384], "count": 70}, "50": {"resolution": [1088, 448], "count": 36}, "51": {"resolution": [1088, 512], "count": 3}, "52": {"resolution": [1216, 448], "count": 36}, "53": {"resolution": [1344, 320], "count": 29}, "54": {"resolution": [1536, 384], "count": 1}}, "mean_img_ar_error": 0.01693107810697896}

### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

ss_resolution: "(768, 768)"
ss_v2: "True"
ss_cache_latents: "False"
ss_unet_lr: "2e-07"
ss_num_reg_images: "0"
ss_max_token_length: "225"
ss_lr_scheduler: "linear"
ss_reg_dataset_dirs: "{}"
ss_lr_warmup_steps: "303"
ss_num_batches_per_epoch: "1222"
ss_lowram: "False"
ss_multires_noise_iterations: "None"
ss_optimizer: "torch.optim.adamw.AdamW(weight_decay=0.01,betas=(0.9, 0.99))"

# Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

## Testing Data, Factors & Metrics

### Testing Data

<!-- This should link to a Data Card if possible. -->

More information needed

### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

More information needed

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 8xA100
- **Hours used:** 64
- **Cloud Provider:** CoreWeave
- **Compute Region:** US Main
- **Carbon Emitted:** 6.72

# Technical Specifications [optional]

## Model Architecture and Objective

Enhance your model's quality and sharpness using your own pre-trained Unet.

## Compute Infrastructure

More information needed

### Hardware

8xA100

### Software

Fully trained only with Kohya S & Shih-Ying Yeh (Kohaku-BlueLeaf) https://arxiv.org/abs/2108.06098

# Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

@misc{LyCORIS,
  author = "Shih-Ying Yeh (Kohaku-BlueLeaf), Yu-Guan Hsieh, Zhidong Gao",
  title = "LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion",
  howpublished = "\url{https://github.com/KohakuBlueleaf/LyCORIS}",
  month = "March",
  year = "2023"
}

**APA:**

More information needed

# Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->

Idle Stoev

# Model Card Contact

di@ffusion.ai

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

For a1111: install https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris and download di.FFUSION.ai-tXe-FXAA to /models/Lycoris

Option 1: Insert <lyco:di.FFUSION.ai-tXe-FXAA:1.0> into the prompt. There is no need to split UNet and text-encoder weights, as this release contains only the text encoder; you can go up to 2x weight.

Option 2: If you need it always ON (e.g. when running a batch from a txt file), go to Settings / Quicksettings list, add sd_lyco, then restart and you should have a drop-down now 🤟 🥃

</details>
MaitreHibou/week2-q-Taxiv3
MaitreHibou
2023-07-09T21:18:28Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-09T21:18:13Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: week2-q-Taxiv3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.44 +/- 2.81
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="MaitreHibou/week2-q-Taxiv3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
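A hedged evaluation sketch for the loaded Q-table follows; it is not part of the original training code. It assumes the gymnasium-style step API and the usual Deep RL course pickle layout, i.e. a dict with `env_id` and `qtable` keys — adjust those names if the file differs.

```python
# Hedged evaluation sketch (not from the original repo). Assumes the pickle follows the
# Deep RL course convention: a dict containing "env_id" and a "qtable" array.
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="MaitreHibou/week2-q-Taxiv3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

returns = []
for _ in range(100):
    state, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(qtable[state]))               # greedy action from the Q-table
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    returns.append(total)

print(f"mean reward over 100 episodes: {np.mean(returns):.2f} +/- {np.std(returns):.2f}")
```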
MaitreHibou/q-FrozenLake-v1-4x4-noSlippery
MaitreHibou
2023-07-09T21:16:20Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-09T21:16:18Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="MaitreHibou/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
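A hedged greedy-rollout sketch is shown below; it is not part of the original training code. It assumes the gymnasium-style API and a pickle with `env_id` and `qtable` keys, and it passes `is_slippery=False` to mirror the non-slippery map this agent was trained on.

```python
# Hedged rollout sketch (not from the original repo). Key names and the gymnasium API
# are assumptions; adjust if your pickle or gym version differs.
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="MaitreHibou/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# The agent was trained without slippery tiles, so recreate the env the same way.
env = gym.make(model["env_id"], is_slippery=False, render_mode="ansi")
qtable = np.array(model["qtable"])

state, _ = env.reset()
done = False
while not done:
    print(env.render())                           # ASCII view of the 4x4 grid
    action = int(np.argmax(qtable[state]))        # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated

print("episode reward:", reward)
```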
Neus/GFPGANv1.4
Neus
2023-07-09T21:10:47Z
0
5
null
[ "onnx", "AMD", "CUDA", "stablediffusion", "DirectML", "ONNX", "text-to-image", "region:us" ]
text-to-image
2023-06-24T20:11:06Z
---
pipeline_tag: text-to-image
tags:
- AMD
- CUDA
- stablediffusion
- DirectML
- ONNX
---

Model converted for use with https://github.com/NeusZimmer/ONNX-ModularUI
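Below is a hedged sketch of running the exported network directly with `onnxruntime`, outside the ONNX-ModularUI frontend. GFPGAN v1.4 is commonly used for face restoration; the ONNX file name, the 512x512 input size and the [-1, 1] NCHW normalisation in this sketch are assumptions about the export and should be checked against the actual files in the repository.

```python
# Hedged sketch, not from the original repo: the file name and pre/post-processing are
# assumptions based on typical GFPGAN v1.4 ONNX exports — verify before relying on them.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

model_path = hf_hub_download("Neus/GFPGANv1.4", "GFPGANv1.4.onnx")   # assumed filename
session = ort.InferenceSession(
    model_path,
    providers=["DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)

img = Image.open("face.png").convert("RGB").resize((512, 512))
x = np.asarray(img, dtype=np.float32) / 255.0
x = (x - 0.5) / 0.5                          # assumed [-1, 1] normalisation
x = x.transpose(2, 0, 1)[None]               # HWC -> NCHW with a batch dimension

input_name = session.get_inputs()[0].name
restored = session.run(None, {input_name: x})[0]

out = (restored[0].transpose(1, 2, 0) * 0.5 + 0.5).clip(0, 1)
Image.fromarray((out * 255).astype(np.uint8)).save("face_restored.png")
```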
QinghaoGuan/distilbert-base-uncased-finetuned-emotion
QinghaoGuan
2023-07-09T20:54:43Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-09T16:26:23Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9235 - name: F1 type: f1 value: 0.9234876879010416 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2214 - Accuracy: 0.9235 - F1: 0.9235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8519 | 1.0 | 250 | 0.3242 | 0.904 | 0.9007 | | 0.2537 | 2.0 | 500 | 0.2214 | 0.9235 | 0.9235 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.0+cpu - Datasets 2.12.0 - Tokenizers 0.13.2
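A minimal usage sketch for this checkpoint with the standard `transformers` pipeline (the input sentence is only an illustration):

```python
# Minimal usage sketch for the fine-tuned emotion classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="QinghaoGuan/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you again!"))               # top label + score
print(classifier("I can't wait to see you again!", top_k=None))   # scores for all six emotion labels
```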
mvasiliniuc/iva-codeint-kotlin-small
mvasiliniuc
2023-07-09T20:34:09Z
9
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "code", "kotlin", "mobile", "generation", "dataset:mvasiliniuc/iva-kotlin-codeint-clean-train", "dataset:mvasiliniuc/iva-kotlin-codeint-clean-valid", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-14T16:14:17Z
--- datasets: - mvasiliniuc/iva-kotlin-codeint-clean-train - mvasiliniuc/iva-kotlin-codeint-clean-valid language: - code tags: - gpt2 - code - kotlin - mobile - generation widget: - text: "/**\n\t* A function that returns the version of the current operating system.\n*/\n" example_title: "Get current device operating system" - text: "/**\n\t* A function that returns the current TimeZone.\n*/\n" example_title: "Get current timezone" - text: "/**\n\t* A data class representing a Bank Account.\n*/\n" example_title: "Data Class - BankAccount" --- iva-codeint-kotlin-small GPT-2 is (small version - 239.4M parameters) trained from scratch to obtain results in the text-to-code task tailored for Kotlin language used in native mobile development (Android). ## Usage ```Python from transformers import pipeline pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-kotlin-small") outputs = pipe("fun printToConsole()") ``` ### Inference ```Python API_URL = "https://api-inference.huggingface.co/models/mvasiliniuc/iva-codeint-kotlin-small" headers = {"Authorization": "Bearer <key>"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": """ /** * A public function that returns the current version of the operating system. */ """ }) pprint.pprint(output, compact=True) ``` ## Training | Config | Value | |------|------------------| | seq length | 1024 | | weight decay | 0.1 | | learning rate | 0.0005 | | max eval steps | -1 | | shuffle buffer | 10000 | | max train steps | 150000 | | mixed precision | fp16 | | num warmup steps | 2000 | | train batch size | 5 | | valid batch size | 5 | | lr scheduler type | cosine | | save checkpoint steps | 15000 | | gradient checkpointing | false | | gradient accumulation steps | 1 | ## Resources Resources used for research: * [Training a causal language model from scratch](https://huggingface.co/learn/nlp-course/chapter7/6) * [CodeParrot a GPT-2 model (1.5B parameters) trained to generate Python code](https://huggingface.co/codeparrot/codeparrot)
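Note that the Inference snippet above relies on two imports that are not shown; a completed header would be (everything else stays as in the snippet, and `<key>` is still your own API token):

```python
# Imports assumed by the Inference API snippet above
import pprint
import requests
```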
Shikshya/tyaani_model
Shikshya
2023-07-09T20:30:58Z
29
0
diffusers
[ "diffusers", "text-to-image", "en", "dataset:Shikshya/revised_tyaani_jwellery_dataset", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-09T14:43:12Z
---
datasets:
- Shikshya/revised_tyaani_jwellery_dataset
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
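A hedged usage sketch based on the `diffusers` / `StableDiffusionPipeline` tags above — the prompt is only an illustration, and the fp16/CUDA lines can be dropped to run on CPU:

```python
# Hedged usage sketch based on the card's diffusers / StableDiffusionPipeline tags.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Shikshya/tyaani_model",
    torch_dtype=torch.float16,   # use the default dtype when running on CPU
)
pipe = pipe.to("cuda")

image = pipe("a gold necklace with ruby pendants, studio product photo").images[0]
image.save("jewellery.png")
```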
TheBloke/wizard-vicuna-13B-SuperHOT-8K-fp16
TheBloke
2023-07-09T20:24:54Z
5
9
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-28T09:38:06Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # June Lee's Wizard Vicuna 13B fp16 This is fp16 pytorch format model files for [June Lee's Wizard Vicuna 13B](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/junelee/wizard-vicuna-13b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/wizard-vicuna-13B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! 
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? 
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: June Lee's Wizard Vicuna 13B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Wizard-Vicuna-13B-HF This is a float16 HF format repo for [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b). June Lee's repo was also HF format. The reason I've made this is that the original repo was in float32, meaning it required 52GB disk space, VRAM and RAM. This model was converted to float16 to make it easier to load and manage. ## Repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ). * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML). * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donaters! 
<!-- footer end --> # Original WizardVicuna-13B model card Github page: https://github.com/melodysdreamj/WizardVicunaLM # WizardVicunaLM ### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage. ## Benchmark ### Approximately 7% performance improvement over VicunaLM ![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png) ### Detail The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order. | | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link | |-----|--------|-------------------|------------|-----------|----------| | Q1 | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) | | Q2 | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) | | Q3 | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) | | Q4 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) | | Q5 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) | | Q6 | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) | | Q7 | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) | | Q8 | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) | | Q9 | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) | | Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) | | Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) | | Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) | | Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) | | Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) | | Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) | | | 91 | 88 | 82 | 80 | | ## Principle We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques. Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0). After creating the training data, I later trained it according to the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh). ## Detailed Method First, we explore and expand various areas in the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5. After that, we applied the following model using Vicuna's fine-tuning format. ## Training Process Trained with 8 A100 GPUs for 35 hours. ## Weights You can see the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) in the huggingface. 
## Conclusion

If we extend the conversations to GPT-4 32K, we can expect a dramatic improvement, as we could generate roughly 8x longer, more accurate and richer conversations.

## License

The model is licensed under the LLaMA license, and the dataset under OpenAI's terms because it uses ChatGPT. Everything else is free.

## Author

[JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo.
TheBloke/Pygmalion-13B-SuperHOT-8K-fp16
TheBloke
2023-07-09T20:24:53Z
12
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-27T14:39:04Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # TehVenom's merge of PygmalionAI's Pygmalion 13B fp16 This is fp16 pytorch format model files for [TehVenom's merge of PygmalionAI's Pygmalion 13B](https://huggingface.co/TehVenom/Pygmalion-13b-Merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PygmalionAI/pygmalion-13b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Pygmalion-13B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! 
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? 
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: TehVenom's merge of PygmalionAI's Pygmalion 13B <h1 style="text-align: center">Pygmalion 13b</h1> <h2 style="text-align: center">A conversational LLaMA fine-tune.</h2> ## Model Details: Pygmalion 13b is a dialogue model based on Meta's LLaMA-13b. This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project. The current Pygmalion-13b has been trained as a LoRA, then merged down to the base model for distribuition. ## Applying the XORs This models has the XOR files pre-applied out of the box. Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-13b ## Prompting The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting: ``` [CHARACTER]'s Persona: [A few sentences about the character you want the model to play] <START> [DIALOGUE HISTORY] You: [User's input message here] [CHARACTER]: ``` Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example: ``` Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests. <START> Assistant: Hello! How may I help you today? You: What is Zork? Assistant: ``` Which will generate something like: ``` Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years." ``` The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete. 
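The formatting rules above can be wrapped in a small helper when driving the model from your own code. This is only an illustrative sketch, not code from the original card; the function name and arguments are invented here.

```python
# Illustrative prompt builder for the persona + <START> + dialogue-history format
# described above. Names are invented for this sketch.
def build_pygmalion_prompt(character, persona, history, user_message):
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    lines.extend(history)                         # e.g. ["You: Hi", f"{character}: Hello!"]
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")                 # the model continues from here
    return "\n".join(lines)

prompt = build_pygmalion_prompt(
    "Assistant",
    "Assistant is a highly intelligent language model trained to comply with user requests.",
    [],
    "What is Zork?",
)
print(prompt)
```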
## Eval / Benchmark scores Current evals out of the Pygmalion-13b model: <br> <html> <head> <style> table { border:1px solid #b3adad; border-collapse:collapse; padding:5px; } table th { border:1px solid #b3adad; padding:5px; background: #f0f0f0; color: #313030; } table td { border:1px solid #b3adad; text-align:center; padding:5px; background: #ffffff; color: #313030; } </style> </head> <body> <table> <thead> <tr> <th>Model:</th> <th>Wikitext2</th> <th>Ptb-New</th> <th>C4-New</th> </tr> </thead> <tbody> <tr> <td>Pygmalion 13b - 16bit</td> <td>5.710726737976074</td> <td>23.633684158325195</td> <td>7.6324849128723145</td> </tr> </tbody> </table> </body> </html> <br>Thanks to YellowRose#1776 for the numbers. <hr> ## Other notes - When prompted correctly, the model will always start by generating a BOS token. This behavior is an accidental side-effect which we plan to address in future model versions and should not be relied upon. - The model was trained as a LoRA with a somewhat unorthodox configuration which causes errors when used with the current version of `peft`, hence we release it as a full model instead. ## Limitations and biases The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope. As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
TheBloke/Chronos-13B-SuperHOT-8K-fp16
TheBloke
2023-07-09T20:24:53Z
14
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-27T13:16:21Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Elinas' Chronos 13B fp16 This is fp16 pytorch format model files for [Elinas' Chronos 13B](https://huggingface.co/elinas/chronos-13b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-13b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Chronos-13B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! 
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? 
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Elinas' Chronos 13B # chronos-13b This is the fp16 PyTorch / HF version of **chronos-13b** This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding. Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on. This model uses Alpaca formatting, so for optimal model performance, use: ``` ### Instruction: Your instruction or question here. ### Response: ``` [4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit) [GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML) <!--**Support My Development of New Models** <a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>--> -- license: other --- # LLaMA Model Card ## Model details **Organization developing the model** The FAIR team of Meta AI. **Model date** LLaMA was trained between December. 2022 and Feb. 2023. **Model version** This is version 1 of the model. **Model type** LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. **Paper or resources for more information** More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/. **Citations details** https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ **License** Non-commercial bespoke license **Where to send questions or comments about the model** Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project , by opening an issue. ## Intended use **Primary intended uses** The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those, evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations. **Primary intended users** The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence. **Out-of-scope use cases** LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers. 
## Factors **Relevant factors** One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model. **Evaluation factors** As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model. ## Metrics **Model performance measures** We use the following measure to evaluate the model: - Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs, - Exact match for question answering, - The toxicity score from Perspective API on RealToxicityPrompts. **Decision thresholds** Not applicable. **Approaches to uncertainty and variability** Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training. ## Evaluation datasets The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. ## Training dataset The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing. ## Quantitative analysis Hyperparameters for the model architecture <table> <thead> <tr> <th >LLaMA</th> <th colspan=6>Model hyper parameters </th> </tr> <tr> <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th> </tr> </thead> <tbody> <tr> <th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> <tr> <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> </tbody> </table> *Table 1 - Summary of LLama Model Hyperparameters* We present our results on eight standard common sense reasoning benchmarks in the table below. 
<table> <thead> <tr> <th>LLaMA</th> <th colspan=9>Reasoning tasks </th> </tr> <tr> <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th> </tr> </thead> <tbody> <tr> <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93 </th> <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94 </th> <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92 </th> <tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr> </tbody> </table> *Table 2 - Summary of LLama Model Performance on Reasoning tasks* We present our results on bias in the table below. Note that lower value is better indicating lower bias. | No | Category | FAIR LLM | | --- | -------------------- | -------- | | 1 | Gender | 70.6 | | 2 | Religion | 79 | | 3 | Race/Color | 57 | | 4 | Sexual orientation | 81 | | 5 | Age | 70.1 | | 6 | Nationality | 64.2 | | 7 | Disability | 66.7 | | 8 | Physical appearance | 77.8 | | 9 | Socioeconomic status | 71.5 | | | LLaMA Average | 66.6 | *Table 3 - Summary bias of our model output* ## Ethical considerations **Data** The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data. **Human life** The model is not intended to inform decisions about matters central to human life, and should not be used in such a way. **Mitigations** We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier. **Risks and harms** Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard. **Use cases** LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
TheBloke/Chronos-Hermes-13B-SuperHOT-8K-fp16
TheBloke
2023-07-09T20:24:51Z
15
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-27T08:59:50Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Austism's Chronos Hermes 13B fp16 This is fp16 pytorch format model files for [Austism's Chronos Hermes 13B](https://huggingface.co/Austism/chronos-hermes-13b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-hermes-13b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Chronos-Hermes-13B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! 
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? 
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Austism's Chronos Hermes 13B ([chronos-13b](https://huggingface.co/elinas/chronos-13b) + [Nous-Hermes-13b](https://huggingface.co/NousResearch/Nous-Hermes-13b)) 75/25 merge This has the aspects of chronos's nature to produce long, descriptive outputs. But with additional coherency and an ability to better obey instructions. Resulting in this model having a great ability to produce proactive storywriting and follow a narrative. This mix contains alot of chronos's writing style and 'flavour' with far less tendency of going AWOL and spouting nonsensical babble. This result was much more successful than my [first chronos merge](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b).
TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16
TheBloke
2023-07-09T20:24:50Z
19
18
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-27T03:55:57Z
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# Eric Hartford's Wizard Vicuna 13B Uncensored fp16

These are fp16 pytorch format model files for [Eric Hartford's Wizard Vicuna 13B Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).

[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.

Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored)

## How to use this model from Python code

First make sure you have Einops installed:

```
pip3 install einops
```

Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.

The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.

```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline

model_name_or_path = "TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192

model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
        config=config,
        trust_remote_code=True,
        device_map='auto')

# Note: check to confirm that this prompt template is correct for this model!
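# The remote modelling code is expected to derive the RoPE scale factor from the
# configured max_position_embeddings. As a rough sanity check (this assumes the
# base model's original context length is 2048 tokens):
expected_scale = config.max_position_embeddings / 2048  # 8192 / 2048 = 4.0
print(f"Expected RoPE scale factor: {expected_scale}")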
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? 
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details

I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 of 0.99, epsilon of 1e-5
- Trained on the 4-bit base model

A rough sketch of how these settings might be expressed with the PEFT library is included at the end of this card.

# Original model card: Eric Hartford's Wizard Vicuna 13B Uncensored

This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained with a subset of the dataset: responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
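The training script itself is not published with this card. Purely as an illustration of the hyperparameters listed under Training Details above, a configuration along these lines could be expressed with the PEFT library; the use of `LoraConfig` and `torch.optim.AdamW` here is an assumption for the sketch, not taken from the original training code.

```python
import torch
from peft import LoraConfig

# LoRA settings matching the Training Details list: rank 4, alpha 8, no dropout,
# no bias, adapters on the q/k/v/o attention projections only.
lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimiser settings matching the list: lr 3e-4, betas (0.9, 0.99), eps 1e-5,
# weight decay 0.1. A stand-in parameter is used here; in a real run the
# optimiser would be built over the LoRA parameters of the 4-bit base model.
dummy_param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW(
    [dummy_param],
    lr=3e-4,
    betas=(0.9, 0.99),
    eps=1e-5,
    weight_decay=0.1,
)
```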