Dataset schema (column types and observed value ranges):

| Column | Type | Min | Max |
|:--------------|:----------------------|:--------------------|:--------------------|
| modelId | string | length 5 | length 139 |
| author | string | length 2 | length 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-07 06:34:03 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (544 classes) | n/a | n/a |
| tags | list | length 1 | length 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-07 06:33:46 |
| card | string | length 11 | length 1.01M |
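Each record below follows this schema. A minimal loading sketch with the `datasets` library; the dataset repo id is hypothetical, substitute the real one:

```python
# Minimal sketch -- load a Hub dataset with the schema above.
# The repo id is hypothetical; substitute the actual dataset name.
from datasets import load_dataset

ds = load_dataset("user/model-cards-dump", split="train")
row = ds[0]
print(row["modelId"], row["pipeline_tag"], row["downloads"])
print(row["card"][:200])  # the flattened model-card markdown
```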
unrealheinrich/carol_realistic_dreambooth
unrealheinrich
2024-01-10T16:18:37Z
2
0
diffusers
[ "diffusers", "art", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2024-01-10T16:16:06Z
--- license: mit language: - en library_name: diffusers pipeline_tag: text-to-image tags: - art ---
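The card above ships no usage snippet. A minimal sketch, assuming the repo hosts a standard `diffusers` text-to-image pipeline (typical for DreamBooth fine-tunes); the subject token in the prompt is a guess:

```python
# Minimal sketch -- assumes a standard diffusers text-to-image pipeline.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "unrealheinrich/carol_realistic_dreambooth",
    torch_dtype=torch.float16,
).to("cuda")

# "carol" as the DreamBooth subject token is an assumption, not documented above.
image = pipe("a realistic photo of carol, portrait").images[0]
image.save("carol.png")
```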
Sphealls/gpt2-finetuned-wikitext2
Sphealls
2024-01-10T16:18:18Z
48
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T15:56:05Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: Sphealls/gpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Sphealls/gpt2-finetuned-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.5033 - Validation Loss: 6.3566 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 7.3132 | 6.7758 | 0 | | 6.5033 | 6.3566 | 1 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
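The card documents a Keras/TensorFlow fine-tune but gives no inference code. A minimal sketch using the TF classes the `tf` tag implies; the prompt is illustrative:

```python
# Minimal sketch -- generate text with the TensorFlow checkpoint.
from transformers import AutoTokenizer, TFAutoModelForCausalLM

repo = "Sphealls/gpt2-finetuned-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The history of", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```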
hyunjerry/distilbert-base-uncased-finetuned-squad
hyunjerry
2024-01-10T16:15:26Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-12-24T06:08:15Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8801 | 1.0 | 507 | 1.7791 | | 1.5209 | 2.0 | 1014 | 1.4818 | | 1.1737 | 3.0 | 1521 | 1.4525 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
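The card again omits usage. A minimal sketch running the checkpoint through the `question-answering` pipeline; question and context are illustrative:

```python
# Minimal sketch -- extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="hyunjerry/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What do extractive QA models predict?",
    context="Extractive QA models predict an answer span inside the given context.",
)
print(result["answer"], round(result["score"], 3))
```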
invalid-coder/Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp
invalid-coder
2024-01-10T16:13:44Z
13,347
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "jeonsworld/CarbonVillain-en-10.7B-v2", "kyujinpy/Sakura-SOLAR-Instruct", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T16:06:58Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - jeonsworld/CarbonVillain-en-10.7B-v2 - kyujinpy/Sakura-SOLAR-Instruct --- # Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [jeonsworld/CarbonVillain-en-10.7B-v2](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2) * [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) ## 🧩 Configuration ```yaml slices: - sources: - model: jeonsworld/CarbonVillain-en-10.7B-v2 layer_range: [0, 48] - model: kyujinpy/Sakura-SOLAR-Instruct layer_range: [0, 48] merge_method: slerp base_model: jeonsworld/CarbonVillain-en-10.7B-v2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors tokenizer_source: union dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "invalid-coder/Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
vipinbansal179/SetFit_sms_Analyzer1
vipinbansal179
2024-01-10T16:06:25Z
52
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-01-10T16:04:01Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: 'pay rs.20.00 / c 91xx3402 ganeshramkudisodebur 22 - 09 - 2023 . ref:3648483126 . query ? click http://m.paytm.me/care : ppbl' - text: inform m / s shree salasar balaji tex transfer rs . 10000.00 account . xxxxxxxx2869 yes bank account rtgs / neft / imp - text: undelivered!\nyour hdfc bank debit card 9875 / c 8494\nreason ch shift . case address change , update seamless card delivery > > hdfcbk.io/a/0nzoo052 - text: rs 5000.00 debit / c upi 23 - 09 - 2023 14:21:12 vpa 35890012004230@cnrb - ( upi ref 363290511260)-federal bank - text: 472448 otp set hdfc bank 4 digit login pin . share otp you?call 18002586161 pipeline_tag: text-classification inference: true base_model: sentence-transformers/all-mpnet-base-v2 model-index: - name: SetFit with sentence-transformers/all-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9715909090909091 name: Accuracy --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 3 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 2 | <ul><li>'840989 otp proceed canara bank mobile banking . valid 15 minute . share otp . - canara bank . kbl8a1ju0mt'</li><li>'cheque . 000102 issue riya collection rs . 12,000.00 present / c xxxxx546157 return unpaid insufficient fund . 
team idfc bank'</li><li>'avl bal / c xxxx0959 10 - jul-2022 06:06:24 inr 0.00 . combine avl bal inr 0.00 . use mb app track / c - kotak bank'</li></ul> | | 0 | <ul><li>'/ c . xxxxxxxx7146 debit rs.11933.00 16 - 09 - 23 / c xxxxxxxx4716 credit ( imp ref 325908759095 ) . warm regard , yes bank'</li><li>'send rs.290.00 kotak bank ac x4524 bharatpe90727843812@yesbankltd 13-10-23.upi ref 328684167136 . , kotak.com/fraud'</li><li>'rs.295 transfer / c ... 4322 : lien_marking_fo . total bal : rs.188.8cr . avlbl amt : rs.609.97(28 - 06 - 2022 16:39:53 ) - bank baroda'</li></ul> | | 1 | <ul><li>'rs 15000credite / c xx4524via neft neofirst technology india private- utr ref hsbcn23276508097 ; avail . bal.:rs 215180.62kotak bank'</li><li>'/ c : xx6775 credit rs.60.00 14 - 11 - 2023 10:47:49 upi - id 8733076955@omni ( upi ref 331800008439).-canara bank'</li><li>'rs.28 credit / c ... 7783 upi/323962847509 kiwicashback_ax . total bal : rs.122751.36cr . avlbl amt : rs.94671.36(27 - 08 - 2023 15:37:01 ) - bank baroda'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9716 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("vipinbansal179/SetFit_sms_Analyzer1") # Run inference preds = model("472448 otp set hdfc bank 4 digit login pin . share otp you?call 18002586161") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 4 | 23.17 | 65 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 231 | | 1 | 131 | | 2 | 338 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (2, 2) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:---------:|:-------------:|:---------------:| | 0.0001 | 1 | 0.2945 | - | | 0.0026 | 50 | 0.3574 | - | | 0.0052 | 100 | 0.2512 | - | | 0.0079 | 150 | 0.2319 | - | | 0.0105 | 200 | 0.2787 | - | | 0.0131 | 250 | 0.2129 | - | | 0.0157 | 300 | 0.2189 | - | | 0.0183 | 350 | 0.0857 | - | | 0.0210 | 400 | 0.0932 | - | | 0.0236 | 450 | 0.065 | - | | 0.0262 | 500 | 0.0553 | - | | 0.0288 | 550 | 0.0674 | - | | 0.0314 | 600 | 0.0239 | - | | 0.0341 | 650 | 0.0054 | - | | 0.0367 | 700 | 0.0025 | - | | 0.0393 | 750 | 0.002 | - | | 0.0419 | 800 | 0.0007 | - | | 0.0446 | 850 | 0.001 | - | | 0.0472 | 900 | 0.0008 | - | | 0.0498 | 950 | 0.0008 | - | | 0.0524 | 1000 | 0.0003 | - | | 0.0550 | 1050 | 0.0012 | - | | 0.0577 | 1100 | 0.002 | - | | 0.0603 | 1150 | 0.0192 | - | | 0.0629 | 1200 | 0.0041 | - | | 0.0655 | 1250 | 0.0002 | - | | 0.0681 | 1300 | 0.0001 | - | | 0.0708 | 1350 | 0.0001 | - | | 0.0734 | 1400 | 0.0001 | - | | 0.0760 | 1450 | 0.0004 | - | | 0.0786 | 1500 | 0.0003 | - | | 0.0812 | 1550 | 0.0002 | - | | 0.0839 | 1600 | 0.0004 | - | | 0.0865 | 1650 | 0.0002 | - | | 0.0891 | 1700 | 0.0002 | - | | 0.0917 | 1750 | 0.0001 | - | | 0.0943 | 1800 | 0.0001 | - | | 0.0970 | 1850 | 0.0001 | - | | 0.0996 | 1900 | 0.0001 | - | | 0.1022 | 1950 | 0.0001 | - | | 0.1048 | 2000 | 0.0001 | - | | 0.1075 | 2050 | 0.0015 | - | | 0.1101 | 2100 | 0.0001 | - | | 0.1127 | 2150 | 0.0001 | - | | 0.1153 | 2200 | 0.0001 | - | | 0.1179 | 2250 | 0.0001 | - | | 0.1206 | 2300 | 0.0 | - | | 0.1232 | 2350 | 0.0001 | - | | 0.1258 | 2400 | 0.0 | - | | 0.1284 | 2450 | 0.0001 | - | | 0.1310 | 2500 | 0.0 | - | | 0.1337 | 2550 | 0.0001 | - | | 0.1363 | 2600 | 0.0 | - | | 0.1389 | 2650 | 0.0001 | - | | 0.1415 | 2700 | 0.0 | - | | 0.1441 | 2750 | 0.0 | - | | 0.1468 | 2800 | 0.0 | - | | 0.1494 | 2850 | 0.0 | - | | 0.1520 | 2900 | 0.0 | - | | 0.1546 | 2950 | 0.0 | - | | 0.1572 | 3000 | 0.0 | - | | 0.1599 | 3050 | 0.0 | - | | 0.1625 | 3100 | 0.0 | - | | 0.1651 | 3150 | 0.0 | - | | 0.1677 | 3200 | 0.0 | - | | 0.1704 | 3250 | 0.0 | - | | 0.1730 | 3300 | 0.0 | - | | 0.1756 | 3350 | 0.0 | - | | 0.1782 | 3400 | 0.0 | - | | 0.1808 | 3450 | 0.0 | - | | 0.1835 | 3500 | 0.0 | - | | 0.1861 | 3550 | 0.0003 | - | | 0.1887 | 3600 | 0.0131 | - | | 0.1913 | 3650 | 0.0004 | - | | 0.1939 | 3700 | 0.0001 | - | | 0.1966 | 3750 | 0.0 | - | | 0.1992 | 3800 | 0.0001 | - | | 0.2018 | 3850 | 0.0002 | - | | 0.2044 | 3900 | 0.0 | - | | 0.2070 | 3950 | 0.0 | - | | 0.2097 | 4000 | 0.0001 | - | | 0.2123 | 4050 | 0.0015 | - | | 0.2149 | 4100 | 0.0002 | - | | 0.2175 | 4150 | 0.0 | - | | 0.2201 | 4200 | 0.0 | - | | 0.2228 | 4250 | 0.0 | - | | 0.2254 | 4300 | 0.0 | - | | 0.2280 | 4350 | 0.0 | - | | 0.2306 | 4400 | 0.0 | - | | 0.2333 | 4450 | 0.0 | - | | 0.2359 | 4500 | 0.0 | - | | 0.2385 | 4550 | 0.0 | - | | 
0.2411 | 4600 | 0.0 | - | | 0.2437 | 4650 | 0.0 | - | | 0.2464 | 4700 | 0.0 | - | | 0.2490 | 4750 | 0.0 | - | | 0.2516 | 4800 | 0.0 | - | | 0.2542 | 4850 | 0.0 | - | | 0.2568 | 4900 | 0.0 | - | | 0.2595 | 4950 | 0.0 | - | | 0.2621 | 5000 | 0.0 | - | | 0.2647 | 5050 | 0.0 | - | | 0.2673 | 5100 | 0.0 | - | | 0.2699 | 5150 | 0.0 | - | | 0.2726 | 5200 | 0.0 | - | | 0.2752 | 5250 | 0.0 | - | | 0.2778 | 5300 | 0.0 | - | | 0.2804 | 5350 | 0.0 | - | | 0.2830 | 5400 | 0.0 | - | | 0.2857 | 5450 | 0.0 | - | | 0.2883 | 5500 | 0.0 | - | | 0.2909 | 5550 | 0.0 | - | | 0.2935 | 5600 | 0.0 | - | | 0.2962 | 5650 | 0.0 | - | | 0.2988 | 5700 | 0.0 | - | | 0.3014 | 5750 | 0.0 | - | | 0.3040 | 5800 | 0.0 | - | | 0.3066 | 5850 | 0.0 | - | | 0.3093 | 5900 | 0.0 | - | | 0.3119 | 5950 | 0.0 | - | | 0.3145 | 6000 | 0.0 | - | | 0.3171 | 6050 | 0.0 | - | | 0.3197 | 6100 | 0.0 | - | | 0.3224 | 6150 | 0.0 | - | | 0.3250 | 6200 | 0.0 | - | | 0.3276 | 6250 | 0.0 | - | | 0.3302 | 6300 | 0.0 | - | | 0.3328 | 6350 | 0.0 | - | | 0.3355 | 6400 | 0.0 | - | | 0.3381 | 6450 | 0.0 | - | | 0.3407 | 6500 | 0.0 | - | | 0.3433 | 6550 | 0.0 | - | | 0.3459 | 6600 | 0.0 | - | | 0.3486 | 6650 | 0.0 | - | | 0.3512 | 6700 | 0.0 | - | | 0.3538 | 6750 | 0.0 | - | | 0.3564 | 6800 | 0.0 | - | | 0.3591 | 6850 | 0.0 | - | | 0.3617 | 6900 | 0.0 | - | | 0.3643 | 6950 | 0.0 | - | | 0.3669 | 7000 | 0.0 | - | | 0.3695 | 7050 | 0.0 | - | | 0.3722 | 7100 | 0.0 | - | | 0.3748 | 7150 | 0.0 | - | | 0.3774 | 7200 | 0.0 | - | | 0.3800 | 7250 | 0.0 | - | | 0.3826 | 7300 | 0.0 | - | | 0.3853 | 7350 | 0.0 | - | | 0.3879 | 7400 | 0.0 | - | | 0.3905 | 7450 | 0.0 | - | | 0.3931 | 7500 | 0.0 | - | | 0.3957 | 7550 | 0.0 | - | | 0.3984 | 7600 | 0.0 | - | | 0.4010 | 7650 | 0.0 | - | | 0.4036 | 7700 | 0.0 | - | | 0.4062 | 7750 | 0.0 | - | | 0.4088 | 7800 | 0.0 | - | | 0.4115 | 7850 | 0.0 | - | | 0.4141 | 7900 | 0.0 | - | | 0.4167 | 7950 | 0.0 | - | | 0.4193 | 8000 | 0.0 | - | | 0.4220 | 8050 | 0.0 | - | | 0.4246 | 8100 | 0.0 | - | | 0.4272 | 8150 | 0.0 | - | | 0.4298 | 8200 | 0.0 | - | | 0.4324 | 8250 | 0.0 | - | | 0.4351 | 8300 | 0.0 | - | | 0.4377 | 8350 | 0.0 | - | | 0.4403 | 8400 | 0.0 | - | | 0.4429 | 8450 | 0.0 | - | | 0.4455 | 8500 | 0.0 | - | | 0.4482 | 8550 | 0.0 | - | | 0.4508 | 8600 | 0.0 | - | | 0.4534 | 8650 | 0.0 | - | | 0.4560 | 8700 | 0.0 | - | | 0.4586 | 8750 | 0.0 | - | | 0.4613 | 8800 | 0.0 | - | | 0.4639 | 8850 | 0.0 | - | | 0.4665 | 8900 | 0.0 | - | | 0.4691 | 8950 | 0.0001 | - | | 0.4717 | 9000 | 0.0 | - | | 0.4744 | 9050 | 0.0 | - | | 0.4770 | 9100 | 0.0 | - | | 0.4796 | 9150 | 0.0 | - | | 0.4822 | 9200 | 0.0 | - | | 0.4849 | 9250 | 0.0 | - | | 0.4875 | 9300 | 0.0 | - | | 0.4901 | 9350 | 0.0 | - | | 0.4927 | 9400 | 0.0 | - | | 0.4953 | 9450 | 0.0 | - | | 0.4980 | 9500 | 0.0 | - | | 0.5006 | 9550 | 0.0 | - | | 0.5032 | 9600 | 0.0 | - | | 0.5058 | 9650 | 0.0 | - | | 0.5084 | 9700 | 0.0 | - | | 0.5111 | 9750 | 0.0 | - | | 0.5137 | 9800 | 0.0 | - | | 0.5163 | 9850 | 0.0 | - | | 0.5189 | 9900 | 0.0 | - | | 0.5215 | 9950 | 0.0 | - | | 0.5242 | 10000 | 0.0 | - | | 0.5268 | 10050 | 0.0 | - | | 0.5294 | 10100 | 0.0 | - | | 0.5320 | 10150 | 0.0 | - | | 0.5346 | 10200 | 0.0 | - | | 0.5373 | 10250 | 0.0 | - | | 0.5399 | 10300 | 0.0 | - | | 0.5425 | 10350 | 0.0 | - | | 0.5451 | 10400 | 0.0 | - | | 0.5478 | 10450 | 0.0 | - | | 0.5504 | 10500 | 0.0 | - | | 0.5530 | 10550 | 0.0 | - | | 0.5556 | 10600 | 0.0 | - | | 0.5582 | 10650 | 0.0 | - | | 0.5609 | 10700 | 0.0 | - | | 0.5635 | 10750 | 0.0 | - | | 0.5661 | 10800 | 0.0 | - | | 0.5687 | 10850 | 0.0 | - | | 
0.5713 | 10900 | 0.0 | - | | 0.5740 | 10950 | 0.0 | - | | 0.5766 | 11000 | 0.0 | - | | 0.5792 | 11050 | 0.0 | - | | 0.5818 | 11100 | 0.0 | - | | 0.5844 | 11150 | 0.0 | - | | 0.5871 | 11200 | 0.0 | - | | 0.5897 | 11250 | 0.0 | - | | 0.5923 | 11300 | 0.0 | - | | 0.5949 | 11350 | 0.0 | - | | 0.5975 | 11400 | 0.0 | - | | 0.6002 | 11450 | 0.0 | - | | 0.6028 | 11500 | 0.0 | - | | 0.6054 | 11550 | 0.0 | - | | 0.6080 | 11600 | 0.0 | - | | 0.6107 | 11650 | 0.0 | - | | 0.6133 | 11700 | 0.0 | - | | 0.6159 | 11750 | 0.0 | - | | 0.6185 | 11800 | 0.0 | - | | 0.6211 | 11850 | 0.0 | - | | 0.6238 | 11900 | 0.0 | - | | 0.6264 | 11950 | 0.0 | - | | 0.6290 | 12000 | 0.0 | - | | 0.6316 | 12050 | 0.0 | - | | 0.6342 | 12100 | 0.0 | - | | 0.6369 | 12150 | 0.0 | - | | 0.6395 | 12200 | 0.0 | - | | 0.6421 | 12250 | 0.0 | - | | 0.6447 | 12300 | 0.0 | - | | 0.6473 | 12350 | 0.0 | - | | 0.6500 | 12400 | 0.0 | - | | 0.6526 | 12450 | 0.0 | - | | 0.6552 | 12500 | 0.0 | - | | 0.6578 | 12550 | 0.0 | - | | 0.6604 | 12600 | 0.0 | - | | 0.6631 | 12650 | 0.0 | - | | 0.6657 | 12700 | 0.0 | - | | 0.6683 | 12750 | 0.0 | - | | 0.6709 | 12800 | 0.0 | - | | 0.6736 | 12850 | 0.0 | - | | 0.6762 | 12900 | 0.0 | - | | 0.6788 | 12950 | 0.0 | - | | 0.6814 | 13000 | 0.0 | - | | 0.6840 | 13050 | 0.0 | - | | 0.6867 | 13100 | 0.0 | - | | 0.6893 | 13150 | 0.0 | - | | 0.6919 | 13200 | 0.0 | - | | 0.6945 | 13250 | 0.0 | - | | 0.6971 | 13300 | 0.0 | - | | 0.6998 | 13350 | 0.0 | - | | 0.7024 | 13400 | 0.0 | - | | 0.7050 | 13450 | 0.0 | - | | 0.7076 | 13500 | 0.0 | - | | 0.7102 | 13550 | 0.0 | - | | 0.7129 | 13600 | 0.0 | - | | 0.7155 | 13650 | 0.0 | - | | 0.7181 | 13700 | 0.0 | - | | 0.7207 | 13750 | 0.0 | - | | 0.7233 | 13800 | 0.0 | - | | 0.7260 | 13850 | 0.0 | - | | 0.7286 | 13900 | 0.0 | - | | 0.7312 | 13950 | 0.0 | - | | 0.7338 | 14000 | 0.0 | - | | 0.7365 | 14050 | 0.0 | - | | 0.7391 | 14100 | 0.0 | - | | 0.7417 | 14150 | 0.0 | - | | 0.7443 | 14200 | 0.0 | - | | 0.7469 | 14250 | 0.0 | - | | 0.7496 | 14300 | 0.0 | - | | 0.7522 | 14350 | 0.0 | - | | 0.7548 | 14400 | 0.0 | - | | 0.7574 | 14450 | 0.0 | - | | 0.7600 | 14500 | 0.0 | - | | 0.7627 | 14550 | 0.0 | - | | 0.7653 | 14600 | 0.0 | - | | 0.7679 | 14650 | 0.0 | - | | 0.7705 | 14700 | 0.0 | - | | 0.7731 | 14750 | 0.0 | - | | 0.7758 | 14800 | 0.0 | - | | 0.7784 | 14850 | 0.0 | - | | 0.7810 | 14900 | 0.0 | - | | 0.7836 | 14950 | 0.0 | - | | 0.7862 | 15000 | 0.0 | - | | 0.7889 | 15050 | 0.0 | - | | 0.7915 | 15100 | 0.0 | - | | 0.7941 | 15150 | 0.0 | - | | 0.7967 | 15200 | 0.0 | - | | 0.7994 | 15250 | 0.0 | - | | 0.8020 | 15300 | 0.0 | - | | 0.8046 | 15350 | 0.0 | - | | 0.8072 | 15400 | 0.0 | - | | 0.8098 | 15450 | 0.0 | - | | 0.8125 | 15500 | 0.0 | - | | 0.8151 | 15550 | 0.0 | - | | 0.8177 | 15600 | 0.0 | - | | 0.8203 | 15650 | 0.0 | - | | 0.8229 | 15700 | 0.0 | - | | 0.8256 | 15750 | 0.0 | - | | 0.8282 | 15800 | 0.0 | - | | 0.8308 | 15850 | 0.0 | - | | 0.8334 | 15900 | 0.0 | - | | 0.8360 | 15950 | 0.0 | - | | 0.8387 | 16000 | 0.0 | - | | 0.8413 | 16050 | 0.0 | - | | 0.8439 | 16100 | 0.0 | - | | 0.8465 | 16150 | 0.0 | - | | 0.8491 | 16200 | 0.0 | - | | 0.8518 | 16250 | 0.0 | - | | 0.8544 | 16300 | 0.0 | - | | 0.8570 | 16350 | 0.0 | - | | 0.8596 | 16400 | 0.0 | - | | 0.8622 | 16450 | 0.0 | - | | 0.8649 | 16500 | 0.0 | - | | 0.8675 | 16550 | 0.0 | - | | 0.8701 | 16600 | 0.0 | - | | 0.8727 | 16650 | 0.0 | - | | 0.8754 | 16700 | 0.0 | - | | 0.8780 | 16750 | 0.0 | - | | 0.8806 | 16800 | 0.0 | - | | 0.8832 | 16850 | 0.0 | - | | 0.8858 | 16900 | 0.0 | - | | 0.8885 | 16950 | 0.0 | - | | 0.8911 | 17000 | 
0.0 | - | | 0.8937 | 17050 | 0.0 | - | | 0.8963 | 17100 | 0.0 | - | | 0.8989 | 17150 | 0.0 | - | | 0.9016 | 17200 | 0.0 | - | | 0.9042 | 17250 | 0.0 | - | | 0.9068 | 17300 | 0.0 | - | | 0.9094 | 17350 | 0.0 | - | | 0.9120 | 17400 | 0.0 | - | | 0.9147 | 17450 | 0.0 | - | | 0.9173 | 17500 | 0.0 | - | | 0.9199 | 17550 | 0.0 | - | | 0.9225 | 17600 | 0.0 | - | | 0.9251 | 17650 | 0.0 | - | | 0.9278 | 17700 | 0.0 | - | | 0.9304 | 17750 | 0.0 | - | | 0.9330 | 17800 | 0.0 | - | | 0.9356 | 17850 | 0.0 | - | | 0.9383 | 17900 | 0.0 | - | | 0.9409 | 17950 | 0.0 | - | | 0.9435 | 18000 | 0.0 | - | | 0.9461 | 18050 | 0.0 | - | | 0.9487 | 18100 | 0.0 | - | | 0.9514 | 18150 | 0.0 | - | | 0.9540 | 18200 | 0.0 | - | | 0.9566 | 18250 | 0.0 | - | | 0.9592 | 18300 | 0.0 | - | | 0.9618 | 18350 | 0.0 | - | | 0.9645 | 18400 | 0.0 | - | | 0.9671 | 18450 | 0.0 | - | | 0.9697 | 18500 | 0.0 | - | | 0.9723 | 18550 | 0.0 | - | | 0.9749 | 18600 | 0.0 | - | | 0.9776 | 18650 | 0.0 | - | | 0.9802 | 18700 | 0.0 | - | | 0.9828 | 18750 | 0.0 | - | | 0.9854 | 18800 | 0.0 | - | | 0.9880 | 18850 | 0.0 | - | | 0.9907 | 18900 | 0.0 | - | | 0.9933 | 18950 | 0.0 | - | | 0.9959 | 19000 | 0.0 | - | | 0.9985 | 19050 | 0.0 | - | | **1.0** | **19078** | **-** | **0.0437** | | 1.0012 | 19100 | 0.0 | - | | 1.0038 | 19150 | 0.0 | - | | 1.0064 | 19200 | 0.0 | - | | 1.0090 | 19250 | 0.0 | - | | 1.0116 | 19300 | 0.0 | - | | 1.0143 | 19350 | 0.0 | - | | 1.0169 | 19400 | 0.0 | - | | 1.0195 | 19450 | 0.3698 | - | | 1.0221 | 19500 | 0.1546 | - | | 1.0247 | 19550 | 0.0179 | - | | 1.0274 | 19600 | 0.0004 | - | | 1.0300 | 19650 | 0.0005 | - | | 1.0326 | 19700 | 0.0 | - | | 1.0352 | 19750 | 0.0002 | - | | 1.0378 | 19800 | 0.0 | - | | 1.0405 | 19850 | 0.0 | - | | 1.0431 | 19900 | 0.0 | - | | 1.0457 | 19950 | 0.0002 | - | | 1.0483 | 20000 | 0.0011 | - | | 1.0509 | 20050 | 0.0 | - | | 1.0536 | 20100 | 0.0 | - | | 1.0562 | 20150 | 0.0 | - | | 1.0588 | 20200 | 0.0003 | - | | 1.0614 | 20250 | 0.0 | - | | 1.0641 | 20300 | 0.0003 | - | | 1.0667 | 20350 | 0.0003 | - | | 1.0693 | 20400 | 0.0 | - | | 1.0719 | 20450 | 0.0 | - | | 1.0745 | 20500 | 0.0 | - | | 1.0772 | 20550 | 0.0 | - | | 1.0798 | 20600 | 0.0 | - | | 1.0824 | 20650 | 0.0 | - | | 1.0850 | 20700 | 0.0 | - | | 1.0876 | 20750 | 0.0 | - | | 1.0903 | 20800 | 0.0 | - | | 1.0929 | 20850 | 0.0 | - | | 1.0955 | 20900 | 0.0 | - | | 1.0981 | 20950 | 0.0 | - | | 1.1007 | 21000 | 0.0 | - | | 1.1034 | 21050 | 0.0 | - | | 1.1060 | 21100 | 0.0 | - | | 1.1086 | 21150 | 0.0 | - | | 1.1112 | 21200 | 0.0 | - | | 1.1138 | 21250 | 0.0 | - | | 1.1165 | 21300 | 0.0 | - | | 1.1191 | 21350 | 0.0 | - | | 1.1217 | 21400 | 0.0 | - | | 1.1243 | 21450 | 0.0 | - | | 1.1270 | 21500 | 0.0 | - | | 1.1296 | 21550 | 0.0 | - | | 1.1322 | 21600 | 0.0 | - | | 1.1348 | 21650 | 0.0 | - | | 1.1374 | 21700 | 0.0 | - | | 1.1401 | 21750 | 0.0 | - | | 1.1427 | 21800 | 0.0 | - | | 1.1453 | 21850 | 0.0 | - | | 1.1479 | 21900 | 0.0 | - | | 1.1505 | 21950 | 0.0 | - | | 1.1532 | 22000 | 0.0 | - | | 1.1558 | 22050 | 0.0 | - | | 1.1584 | 22100 | 0.0 | - | | 1.1610 | 22150 | 0.0 | - | | 1.1636 | 22200 | 0.0 | - | | 1.1663 | 22250 | 0.0 | - | | 1.1689 | 22300 | 0.0 | - | | 1.1715 | 22350 | 0.0 | - | | 1.1741 | 22400 | 0.0 | - | | 1.1767 | 22450 | 0.0 | - | | 1.1794 | 22500 | 0.0 | - | | 1.1820 | 22550 | 0.0 | - | | 1.1846 | 22600 | 0.0 | - | | 1.1872 | 22650 | 0.0 | - | | 1.1899 | 22700 | 0.0 | - | | 1.1925 | 22750 | 0.0 | - | | 1.1951 | 22800 | 0.0 | - | | 1.1977 | 22850 | 0.0 | - | | 1.2003 | 22900 | 0.0 | - | | 1.2030 | 22950 | 0.0 | - | | 1.2056 | 
23000 | 0.0 | - | | 1.2082 | 23050 | 0.0 | - | | 1.2108 | 23100 | 0.0 | - | | 1.2134 | 23150 | 0.0 | - | | 1.2161 | 23200 | 0.0 | - | | 1.2187 | 23250 | 0.0 | - | | 1.2213 | 23300 | 0.0 | - | | 1.2239 | 23350 | 0.0 | - | | 1.2265 | 23400 | 0.0 | - | | 1.2292 | 23450 | 0.0 | - | | 1.2318 | 23500 | 0.0 | - | | 1.2344 | 23550 | 0.0 | - | | 1.2370 | 23600 | 0.0 | - | | 1.2396 | 23650 | 0.0 | - | | 1.2423 | 23700 | 0.0 | - | | 1.2449 | 23750 | 0.0 | - | | 1.2475 | 23800 | 0.0 | - | | 1.2501 | 23850 | 0.0 | - | | 1.2528 | 23900 | 0.0 | - | | 1.2554 | 23950 | 0.0 | - | | 1.2580 | 24000 | 0.0 | - | | 1.2606 | 24050 | 0.0 | - | | 1.2632 | 24100 | 0.0 | - | | 1.2659 | 24150 | 0.0 | - | | 1.2685 | 24200 | 0.0 | - | | 1.2711 | 24250 | 0.0 | - | | 1.2737 | 24300 | 0.0 | - | | 1.2763 | 24350 | 0.0 | - | | 1.2790 | 24400 | 0.0 | - | | 1.2816 | 24450 | 0.0 | - | | 1.2842 | 24500 | 0.0 | - | | 1.2868 | 24550 | 0.0 | - | | 1.2894 | 24600 | 0.0 | - | | 1.2921 | 24650 | 0.0 | - | | 1.2947 | 24700 | 0.0 | - | | 1.2973 | 24750 | 0.0 | - | | 1.2999 | 24800 | 0.0 | - | | 1.3025 | 24850 | 0.0 | - | | 1.3052 | 24900 | 0.0 | - | | 1.3078 | 24950 | 0.0 | - | | 1.3104 | 25000 | 0.0 | - | | 1.3130 | 25050 | 0.0 | - | | 1.3157 | 25100 | 0.0 | - | | 1.3183 | 25150 | 0.0 | - | | 1.3209 | 25200 | 0.0 | - | | 1.3235 | 25250 | 0.0 | - | | 1.3261 | 25300 | 0.0 | - | | 1.3288 | 25350 | 0.0 | - | | 1.3314 | 25400 | 0.0 | - | | 1.3340 | 25450 | 0.0 | - | | 1.3366 | 25500 | 0.0 | - | | 1.3392 | 25550 | 0.0 | - | | 1.3419 | 25600 | 0.0 | - | | 1.3445 | 25650 | 0.0 | - | | 1.3471 | 25700 | 0.0 | - | | 1.3497 | 25750 | 0.0 | - | | 1.3523 | 25800 | 0.0 | - | | 1.3550 | 25850 | 0.0 | - | | 1.3576 | 25900 | 0.0 | - | | 1.3602 | 25950 | 0.0 | - | | 1.3628 | 26000 | 0.0 | - | | 1.3654 | 26050 | 0.0 | - | | 1.3681 | 26100 | 0.0 | - | | 1.3707 | 26150 | 0.0 | - | | 1.3733 | 26200 | 0.0 | - | | 1.3759 | 26250 | 0.0 | - | | 1.3786 | 26300 | 0.0 | - | | 1.3812 | 26350 | 0.0 | - | | 1.3838 | 26400 | 0.0 | - | | 1.3864 | 26450 | 0.0 | - | | 1.3890 | 26500 | 0.0 | - | | 1.3917 | 26550 | 0.0 | - | | 1.3943 | 26600 | 0.0 | - | | 1.3969 | 26650 | 0.0 | - | | 1.3995 | 26700 | 0.0 | - | | 1.4021 | 26750 | 0.0 | - | | 1.4048 | 26800 | 0.0 | - | | 1.4074 | 26850 | 0.0 | - | | 1.4100 | 26900 | 0.0 | - | | 1.4126 | 26950 | 0.0 | - | | 1.4152 | 27000 | 0.0 | - | | 1.4179 | 27050 | 0.0 | - | | 1.4205 | 27100 | 0.0 | - | | 1.4231 | 27150 | 0.0 | - | | 1.4257 | 27200 | 0.0 | - | | 1.4283 | 27250 | 0.0 | - | | 1.4310 | 27300 | 0.0 | - | | 1.4336 | 27350 | 0.0 | - | | 1.4362 | 27400 | 0.0 | - | | 1.4388 | 27450 | 0.0 | - | | 1.4415 | 27500 | 0.0 | - | | 1.4441 | 27550 | 0.0 | - | | 1.4467 | 27600 | 0.0 | - | | 1.4493 | 27650 | 0.0 | - | | 1.4519 | 27700 | 0.0 | - | | 1.4546 | 27750 | 0.0 | - | | 1.4572 | 27800 | 0.0 | - | | 1.4598 | 27850 | 0.0 | - | | 1.4624 | 27900 | 0.0 | - | | 1.4650 | 27950 | 0.0 | - | | 1.4677 | 28000 | 0.0 | - | | 1.4703 | 28050 | 0.0 | - | | 1.4729 | 28100 | 0.0 | - | | 1.4755 | 28150 | 0.0 | - | | 1.4781 | 28200 | 0.0 | - | | 1.4808 | 28250 | 0.0 | - | | 1.4834 | 28300 | 0.0 | - | | 1.4860 | 28350 | 0.0 | - | | 1.4886 | 28400 | 0.0 | - | | 1.4912 | 28450 | 0.0 | - | | 1.4939 | 28500 | 0.0 | - | | 1.4965 | 28550 | 0.0 | - | | 1.4991 | 28600 | 0.0 | - | | 1.5017 | 28650 | 0.0 | - | | 1.5044 | 28700 | 0.0 | - | | 1.5070 | 28750 | 0.0 | - | | 1.5096 | 28800 | 0.0 | - | | 1.5122 | 28850 | 0.0 | - | | 1.5148 | 28900 | 0.0 | - | | 1.5175 | 28950 | 0.0 | - | | 1.5201 | 29000 | 0.0 | - | | 1.5227 | 29050 | 0.0 | - | | 1.5253 | 29100 | 0.0 | - 
| | 1.5279 | 29150 | 0.0 | - | | 1.5306 | 29200 | 0.0 | - | | 1.5332 | 29250 | 0.0 | - | | 1.5358 | 29300 | 0.0 | - | | 1.5384 | 29350 | 0.0 | - | | 1.5410 | 29400 | 0.0 | - | | 1.5437 | 29450 | 0.0 | - | | 1.5463 | 29500 | 0.0 | - | | 1.5489 | 29550 | 0.0 | - | | 1.5515 | 29600 | 0.0 | - | | 1.5541 | 29650 | 0.0 | - | | 1.5568 | 29700 | 0.0 | - | | 1.5594 | 29750 | 0.0 | - | | 1.5620 | 29800 | 0.0 | - | | 1.5646 | 29850 | 0.0 | - | | 1.5673 | 29900 | 0.0 | - | | 1.5699 | 29950 | 0.0 | - | | 1.5725 | 30000 | 0.0 | - | | 1.5751 | 30050 | 0.0 | - | | 1.5777 | 30100 | 0.0 | - | | 1.5804 | 30150 | 0.0 | - | | 1.5830 | 30200 | 0.0 | - | | 1.5856 | 30250 | 0.0 | - | | 1.5882 | 30300 | 0.0 | - | | 1.5908 | 30350 | 0.0 | - | | 1.5935 | 30400 | 0.0 | - | | 1.5961 | 30450 | 0.0 | - | | 1.5987 | 30500 | 0.0 | - | | 1.6013 | 30550 | 0.0 | - | | 1.6039 | 30600 | 0.0 | - | | 1.6066 | 30650 | 0.0 | - | | 1.6092 | 30700 | 0.0 | - | | 1.6118 | 30750 | 0.0 | - | | 1.6144 | 30800 | 0.0 | - | | 1.6170 | 30850 | 0.0 | - | | 1.6197 | 30900 | 0.0 | - | | 1.6223 | 30950 | 0.0 | - | | 1.6249 | 31000 | 0.0 | - | | 1.6275 | 31050 | 0.0 | - | | 1.6301 | 31100 | 0.0 | - | | 1.6328 | 31150 | 0.0 | - | | 1.6354 | 31200 | 0.0 | - | | 1.6380 | 31250 | 0.0 | - | | 1.6406 | 31300 | 0.0 | - | | 1.6433 | 31350 | 0.0 | - | | 1.6459 | 31400 | 0.0 | - | | 1.6485 | 31450 | 0.0 | - | | 1.6511 | 31500 | 0.0 | - | | 1.6537 | 31550 | 0.0 | - | | 1.6564 | 31600 | 0.0 | - | | 1.6590 | 31650 | 0.0 | - | | 1.6616 | 31700 | 0.0 | - | | 1.6642 | 31750 | 0.0 | - | | 1.6668 | 31800 | 0.0 | - | | 1.6695 | 31850 | 0.0 | - | | 1.6721 | 31900 | 0.0 | - | | 1.6747 | 31950 | 0.0 | - | | 1.6773 | 32000 | 0.0 | - | | 1.6799 | 32050 | 0.0 | - | | 1.6826 | 32100 | 0.0 | - | | 1.6852 | 32150 | 0.0 | - | | 1.6878 | 32200 | 0.0 | - | | 1.6904 | 32250 | 0.0 | - | | 1.6930 | 32300 | 0.0 | - | | 1.6957 | 32350 | 0.0 | - | | 1.6983 | 32400 | 0.0 | - | | 1.7009 | 32450 | 0.0 | - | | 1.7035 | 32500 | 0.0 | - | | 1.7062 | 32550 | 0.0 | - | | 1.7088 | 32600 | 0.0 | - | | 1.7114 | 32650 | 0.0 | - | | 1.7140 | 32700 | 0.0 | - | | 1.7166 | 32750 | 0.0 | - | | 1.7193 | 32800 | 0.0 | - | | 1.7219 | 32850 | 0.0 | - | | 1.7245 | 32900 | 0.0 | - | | 1.7271 | 32950 | 0.0 | - | | 1.7297 | 33000 | 0.0 | - | | 1.7324 | 33050 | 0.0 | - | | 1.7350 | 33100 | 0.0 | - | | 1.7376 | 33150 | 0.0 | - | | 1.7402 | 33200 | 0.0 | - | | 1.7428 | 33250 | 0.0 | - | | 1.7455 | 33300 | 0.0 | - | | 1.7481 | 33350 | 0.0 | - | | 1.7507 | 33400 | 0.0 | - | | 1.7533 | 33450 | 0.0 | - | | 1.7559 | 33500 | 0.0 | - | | 1.7586 | 33550 | 0.0 | - | | 1.7612 | 33600 | 0.0 | - | | 1.7638 | 33650 | 0.0 | - | | 1.7664 | 33700 | 0.0 | - | | 1.7691 | 33750 | 0.0 | - | | 1.7717 | 33800 | 0.0 | - | | 1.7743 | 33850 | 0.0 | - | | 1.7769 | 33900 | 0.0 | - | | 1.7795 | 33950 | 0.0 | - | | 1.7822 | 34000 | 0.0 | - | | 1.7848 | 34050 | 0.0 | - | | 1.7874 | 34100 | 0.0 | - | | 1.7900 | 34150 | 0.0 | - | | 1.7926 | 34200 | 0.0 | - | | 1.7953 | 34250 | 0.0 | - | | 1.7979 | 34300 | 0.0 | - | | 1.8005 | 34350 | 0.0 | - | | 1.8031 | 34400 | 0.0 | - | | 1.8057 | 34450 | 0.0 | - | | 1.8084 | 34500 | 0.0 | - | | 1.8110 | 34550 | 0.0 | - | | 1.8136 | 34600 | 0.0 | - | | 1.8162 | 34650 | 0.0 | - | | 1.8188 | 34700 | 0.0 | - | | 1.8215 | 34750 | 0.0 | - | | 1.8241 | 34800 | 0.0 | - | | 1.8267 | 34850 | 0.0 | - | | 1.8293 | 34900 | 0.0 | - | | 1.8320 | 34950 | 0.0 | - | | 1.8346 | 35000 | 0.0 | - | | 1.8372 | 35050 | 0.0 | - | | 1.8398 | 35100 | 0.0 | - | | 1.8424 | 35150 | 0.0 | - | | 1.8451 | 35200 | 0.0 | - | | 1.8477 | 
35250 | 0.0 | - | | 1.8503 | 35300 | 0.0 | - | | 1.8529 | 35350 | 0.0 | - | | 1.8555 | 35400 | 0.0 | - | | 1.8582 | 35450 | 0.0 | - | | 1.8608 | 35500 | 0.0 | - | | 1.8634 | 35550 | 0.0 | - | | 1.8660 | 35600 | 0.0 | - | | 1.8686 | 35650 | 0.0 | - | | 1.8713 | 35700 | 0.0 | - | | 1.8739 | 35750 | 0.0 | - | | 1.8765 | 35800 | 0.0 | - | | 1.8791 | 35850 | 0.0 | - | | 1.8817 | 35900 | 0.0 | - | | 1.8844 | 35950 | 0.0 | - | | 1.8870 | 36000 | 0.0 | - | | 1.8896 | 36050 | 0.0 | - | | 1.8922 | 36100 | 0.0 | - | | 1.8949 | 36150 | 0.0 | - | | 1.8975 | 36200 | 0.0 | - | | 1.9001 | 36250 | 0.0 | - | | 1.9027 | 36300 | 0.0 | - | | 1.9053 | 36350 | 0.0 | - | | 1.9080 | 36400 | 0.0 | - | | 1.9106 | 36450 | 0.0 | - | | 1.9132 | 36500 | 0.0 | - | | 1.9158 | 36550 | 0.0 | - | | 1.9184 | 36600 | 0.0 | - | | 1.9211 | 36650 | 0.0 | - | | 1.9237 | 36700 | 0.0 | - | | 1.9263 | 36750 | 0.0 | - | | 1.9289 | 36800 | 0.0 | - | | 1.9315 | 36850 | 0.0 | - | | 1.9342 | 36900 | 0.0 | - | | 1.9368 | 36950 | 0.0 | - | | 1.9394 | 37000 | 0.0 | - | | 1.9420 | 37050 | 0.0 | - | | 1.9446 | 37100 | 0.0 | - | | 1.9473 | 37150 | 0.0 | - | | 1.9499 | 37200 | 0.0 | - | | 1.9525 | 37250 | 0.0 | - | | 1.9551 | 37300 | 0.0 | - | | 1.9578 | 37350 | 0.0 | - | | 1.9604 | 37400 | 0.0 | - | | 1.9630 | 37450 | 0.0 | - | | 1.9656 | 37500 | 0.0 | - | | 1.9682 | 37550 | 0.0 | - | | 1.9709 | 37600 | 0.0 | - | | 1.9735 | 37650 | 0.0 | - | | 1.9761 | 37700 | 0.0 | - | | 1.9787 | 37750 | 0.0 | - | | 1.9813 | 37800 | 0.0 | - | | 1.9840 | 37850 | 0.0 | - | | 1.9866 | 37900 | 0.0 | - | | 1.9892 | 37950 | 0.0 | - | | 1.9918 | 38000 | 0.0 | - | | 1.9944 | 38050 | 0.0 | - | | 1.9971 | 38100 | 0.0 | - | | 1.9997 | 38150 | 0.0 | - | | 2.0 | 38156 | - | 0.0438 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.36.0 - PyTorch: 2.0.0 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
sidaus/hatespeech-commentnews-large-ind-2
sidaus
2024-01-10T16:03:56Z
92
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "id", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-04T15:36:12Z
--- language: - id metrics: - accuracy pipeline_tag: text-classification ---
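Since the card is only front-matter, here is a minimal inference sketch; the output labels come from whatever the checkpoint's config defines, and the Indonesian example input is illustrative:

```python
# Minimal sketch -- text classification with the Indonesian checkpoint.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="sidaus/hatespeech-commentnews-large-ind-2",
)
print(clf("Berita ini sangat informatif dan bermanfaat."))
```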
mibressler/tumproject
mibressler
2024-01-10T15:53:37Z
0
0
null
[ "tensorboard", "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T15:53:32Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
Abhra-loony/QA_model_with_squad
Abhra-loony
2024-01-10T15:30:21Z
49
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-09T06:41:54Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: Abhra-loony/QA_model_with_squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Abhra-loony/QA_model_with_squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.3882 - Validation Loss: 1.6437 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.9201 | 1.9250 | 0 | | 1.5912 | 1.6437 | 1 | | 1.3882 | 1.6437 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
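As with the other Keras-generated cards, no inference code is given. A minimal sketch doing manual span extraction with the TensorFlow checkpoint; question and context are illustrative:

```python
# Minimal sketch -- manual extractive QA with the TF checkpoint.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

repo = "Abhra-loony/QA_model_with_squad"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForQuestionAnswering.from_pretrained(repo)

question = "What is SQuAD?"
context = "SQuAD is a reading-comprehension dataset built from Wikipedia articles."
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Pick the most likely start/end token and decode the span between them.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```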
Chuanming/ChatGLM3-12B-slerp
Chuanming
2024-01-10T15:24:33Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "THUDM/chatglm3-6b-32k", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T16:25:00Z
--- tags: - merge - mergekit - lazymergekit - THUDM/chatglm3-6b-32k - THUDM/chatglm3-6b-32k --- # ChatGLM3-12B-slerp ChatGLM3-12B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [THUDM/chatglm3-6b-32k](https://huggingface.co/THUDM/chatglm3-6b-32k) * [THUDM/chatglm3-6b-32k](https://huggingface.co/THUDM/chatglm3-6b-32k) ## 🧩 Configuration ```yaml slices: - sources: - model: THUDM/chatglm3-6b-32k layer_range: [0, 28] - model: THUDM/chatglm3-6b-32k layer_range: [0, 28] merge_method: slerp base_model: THUDM/chatglm3-6b-32k parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Chuanming/ChatGLM3-12B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ailoveydovey/ptn
ailoveydovey
2024-01-10T15:20:30Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T14:33:23Z
--- license: creativeml-openrail-m ---
daochf/Lora-Meta-Llama2-13b-hf-QandA_2g_v01-v06
daochf
2024-01-10T15:11:50Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us" ]
null
2024-01-10T15:11:17Z
--- library_name: peft base_model: meta-llama/Llama-2-13b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
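The card's "How to Get Started" section is empty, but the recorded `bitsandbytes` config and `base_model` are enough for a minimal loading sketch; access to the gated Llama-2 weights is assumed:

```python
# Minimal sketch -- load the gated base model with the card's recorded
# 4-bit nf4 config, then attach this LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "daochf/Lora-Meta-Llama2-13b-hf-QandA_2g_v01-v06")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
```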
Perselope/Taxi-v39
Perselope
2024-01-10T15:07:58Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-10T15:04:39Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v39 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Perselope/Taxi-v39", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
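A fuller sketch of a greedy rollout with the downloaded Q-table; the pickle keys (`env_id`, `qtable`) follow the Hugging Face Deep RL course convention and are an assumption here:

```python
# Minimal sketch -- greedy rollout with the Q-table. Assumes the pickle is a
# dict with "env_id" and "qtable" keys (Deep RL course format).
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Perselope/Taxi-v39", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```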
vibhuagrawal/Mixtral-8x7B-Instruct-v0.1-GPTQ
vibhuagrawal
2024-01-10T15:07:53Z
5
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "fr", "it", "de", "es", "en", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:quantized:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-10T15:07:52Z
--- base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 inference: false language: - fr - it - de - es - en license: apache-2.0 model_creator: Mistral AI_ model_name: Mixtral 8X7B Instruct v0.1 model_type: mixtral prompt_template: '[INST] {prompt} [/INST] ' quantized_by: TheBloke widget: - output: text: 'Arr, shiver me timbers! Ye have a llama on yer lawn, ye say? Well, that be a new one for me! Here''s what I''d suggest, arr: 1. Firstly, ensure yer safety. Llamas may look gentle, but they can be protective if they feel threatened. 2. Try to make the area less appealing to the llama. Remove any food sources or water that might be attracting it. 3. Contact local animal control or a wildlife rescue organization. They be the experts and can provide humane ways to remove the llama from yer property. 4. If ye have any experience with animals, you could try to gently herd the llama towards a nearby field or open space. But be careful, arr! Remember, arr, it be important to treat the llama with respect and care. It be a creature just trying to survive, like the rest of us.' text: '[INST] You are a pirate chatbot who always responds with Arr and pirate speak! There''s a llama on my lawn, how can I get rid of him? [/INST]' --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mixtral 8X7B Instruct v0.1 - GPTQ - Model creator: [Mistral AI_](https://huggingface.co/mistralai) - Original model: [Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) <!-- description start --> # Description This repo contains GPTQ model files for [Mistral AI_'s Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). Mixtral GPTQs currently require: * Transformers 4.36.0 or later * either, AutoGPTQ 0.6 compiled from source, or * Transformers 4.37.0.dev0 compiled from Github with: `pip3 install git+https://github.com/huggingface/transformers` Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. 
<!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF) * [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Mistral ``` [INST] {prompt} [/INST] ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. Mixtral GPTQs currently have special requirements - see Description above. <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. 
| | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, e.g. `TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ:gptq-4bit-128g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Mixtral-8x7B-Instruct-v0.1-GPTQ`:

```shell
mkdir Mixtral-8x7B-Instruct-v0.1-GPTQ
huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ --local-dir Mixtral-8x7B-Instruct-v0.1-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Mixtral-8x7B-Instruct-v0.1-GPTQ
huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Mixtral-8x7B-Instruct-v0.1-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir Mixtral-8x7B-Instruct-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ --local-dir Mixtral-8x7B-Instruct-v0.1-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
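### With the `huggingface_hub` Python API

If you prefer to stay in Python, the CLI commands above can be replicated with `snapshot_download`; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Download a specific branch of the repo to a local folder,
# mirroring the huggingface-cli commands above.
snapshot_download(
    repo_id="TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ",
    revision="gptq-4bit-128g-actorder_True",  # omit for the main branch
    local_dir="Mixtral-8x7B-Instruct-v0.1-GPTQ",
    local_dir_use_symlinks=False,
)
```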
<!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) **NOTE**: Requires: * Transformers 4.36.0, or Transformers 4.37.0.dev0 from Github * Either AutoGPTQ 0.6 compiled from source and `Loader: AutoGPTQ`, * or, `Loader: Transformers`, if you installed Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers` Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Mixtral-8x7B-Instruct-v0.1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) Not currently supported for Mixtral models. <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.37.0.dev0 from Github, Optimum 1.16.0 or later, and AutoGPTQ 0.5.1 or later. ```shell pip3 install --upgrade "git+https://github.com/huggingface/transformers" optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ DISABLE_QIGEN=1 pip3 install . 
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''[INST] {prompt} [/INST]
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ 0.6 (compiled from source) and Transformers 4.37.0 (installed from Github).
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Mistral AI_'s Mixtral 8X7B Instruct v0.1

# Model Card for Mixtral-8x7B

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning

This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.

## Instruction format

This format must be strictly respected, otherwise the model will generate sub-optimal outputs.

The template used to build a prompt for the Instruct model is defined as follows:

```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```

Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while `[INST]` and `[/INST]` are regular strings.
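With recent `transformers` versions, which ship a chat template for this model, the same format can be produced programmatically. A minimal sketch (assuming `transformers` >= 4.34):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "I'm quite partial to fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]
# Renders "<s>[INST] ... [/INST] ... </s>[INST] ... [/INST]" as described above.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```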
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:

```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```

In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

By default, transformers will load the model in full precision. Therefore you may want to further reduce the memory requirements by using the optimizations offered in the HF ecosystem:

### In half-precision

Note that `float16` precision only works on GPU devices.

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Lower precision (8-bit & 4-bit) using `bitsandbytes`

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Load the model with Flash Attention 2

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

## Limitations

The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms.

We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
Perselope/Taxi-v40
Perselope
2024-01-10T15:06:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-10T15:06:36Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v40
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="Perselope/Taxi-v40", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
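As a quick sanity check, a greedy rollout sketch (assuming a Gymnasium-style API and that the pickled dict stores the Q-table under a `qtable` key, as in the Deep RL course notebooks — treat that key name as an assumption):

```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])
state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy w.r.t. the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```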
Aedelon/dqn-SpaceInvadersNoFrameskip-v4
Aedelon
2024-01-10T15:01:12Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-10T15:00:38Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 793.00 +/- 376.24 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Aedelon -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Aedelon -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Aedelon ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
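The checkpoint can also be loaded directly with SB3, skipping the Zoo scripts. A sketch (the filename follows the usual RL Zoo naming convention and is an assumption here):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from this repo; the filename is an assumption.
checkpoint = load_from_hub("Aedelon/dqn-SpaceInvadersNoFrameskip-v4",
                           "dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(checkpoint)
```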
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_all_Seed101
behzadnet
2024-01-10T14:55:06Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-01-10T01:53:44Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.7.0.dev0
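For reference, the settings listed above correspond to roughly the following `BitsAndBytesConfig` (a sketch, not the authors' training script):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```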
BanUrsus/distilbert-base-uncased-fillmask-finetuned-imdb-classifier_nlp-course-chapter7-section2
BanUrsus
2024-01-10T14:51:19Z
90
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:BanUrsus/distilbert-base-uncased-finetuned-imdb_nlp-course-chapter7-section2", "base_model:finetune:BanUrsus/distilbert-base-uncased-finetuned-imdb_nlp-course-chapter7-section2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-10T14:37:44Z
--- license: apache-2.0 base_model: BanUrsus/distilbert-base-uncased-finetuned-imdb_nlp-course-chapter7-section2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-fillmask-finetuned-imdb-classifier_nlp-course-chapter7-section2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-fillmask-finetuned-imdb-classifier_nlp-course-chapter7-section2 This model is a fine-tuned version of [BanUrsus/distilbert-base-uncased-finetuned-imdb_nlp-course-chapter7-section2](https://huggingface.co/BanUrsus/distilbert-base-uncased-finetuned-imdb_nlp-course-chapter7-section2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2278 - Accuracy: 0.934 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3192 | 1.0 | 313 | 0.1938 | 0.932 | | 0.1801 | 2.0 | 626 | 0.1996 | 0.931 | | 0.1166 | 3.0 | 939 | 0.2278 | 0.934 | ### Framework versions - Transformers 4.35.2 - Pytorch 1.11.0+cu102 - Datasets 2.15.0 - Tokenizers 0.15.0
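A quick way to try the classifier — a sketch using the standard `pipeline` API (not part of the generated card):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="BanUrsus/distilbert-base-uncased-fillmask-finetuned-imdb-classifier_nlp-course-chapter7-section2",
)
print(clf("This movie was a delight from start to finish."))
```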
RKessler/EVALutionRelationTrain-4
RKessler
2024-01-10T14:40:42Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-10T13:10:07Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: EVALutionRelationTrain-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EVALutionRelationTrain-4 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6940 - Accuracy: 0.5 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 0.35 | 100 | 0.7110 | 0.5 | 0.0 | 0.0 | 0.0 | | No log | 0.71 | 200 | 0.7130 | 0.5 | 0.5 | 1.0 | 0.6667 | | No log | 1.06 | 300 | 0.6943 | 0.5 | 0.0 | 0.0 | 0.0 | | No log | 1.42 | 400 | 0.6932 | 0.5 | 0.5 | 1.0 | 0.6667 | | 0.7004 | 1.77 | 500 | 0.6946 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.7004 | 2.13 | 600 | 0.6999 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.7004 | 2.48 | 700 | 0.6963 | 0.5 | 0.5 | 1.0 | 0.6667 | | 0.7004 | 2.84 | 800 | 0.6953 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.7004 | 3.19 | 900 | 0.6932 | 0.5 | 0.5 | 1.0 | 0.6667 | | 0.6979 | 3.55 | 1000 | 0.6942 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.6979 | 3.9 | 1100 | 0.6957 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.6979 | 4.26 | 1200 | 0.6934 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.6979 | 4.61 | 1300 | 0.6971 | 0.5 | 0.5 | 1.0 | 0.6667 | | 0.6979 | 4.96 | 1400 | 0.6940 | 0.5 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.5 - Tokenizers 0.14.1
yaneq/jan_SDXL_LoRA_500_1e4_9d94
yaneq
2024-01-10T14:38:31Z
10
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-10T14:38:29Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Qwerty man
license: openrail++
---

# SDXL LoRA DreamBooth - yaneq/jan_SDXL_LoRA_500_1e4_9d94

<Gallery />

## Model description

These are yaneq/jan_SDXL_LoRA_500_1e4_9d94 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of Qwerty man` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/yaneq/jan_SDXL_LoRA_500_1e4_9d94/tree/main) them in the Files & versions tab.
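A sketch of loading the weights with `diffusers` (the default LoRA weight filename in the repo is assumed):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then apply this repo's LoRA weights on top.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yaneq/jan_SDXL_LoRA_500_1e4_9d94")

image = pipe("a photo of Qwerty man").images[0]
image.save("qwerty.png")
```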
arks2/234
arks2
2024-01-10T14:36:59Z
0
0
null
[ "license:other", "region:us" ]
null
2024-01-10T14:36:59Z
--- license: other license_name: '21312' license_link: LICENSE ---
aarongrainer/ppo-SnowballTarget
aarongrainer
2024-01-10T14:32:01Z
21
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-01-10T14:31:57Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aarongrainer/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
ByunByun/qlora-koalpaca-polyglot-12.8b-600step_positive_blog_keywords_fix
ByunByun
2024-01-10T14:22:22Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2024-01-10T14:22:15Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
GedByrne/q-FrozenLake-v1-4x4-noSlippery
GedByrne
2024-01-10T14:16:44Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-10T14:16:42Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="GedByrne/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
ayousanz/any-ONNX-models
ayousanz
2024-01-10T14:16:15Z
0
3
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2024-01-04T15:26:42Z
---
license: apache-2.0
---

A collection of LLM models converted to ONNX.

Please use each model in accordance with that model's own license.

The following is a list of the models included in this repository.

**Other**

|Model name|Model size|Quantized model|
|---|---|---|
|bert-base-uncased|0.11B|⚪︎|
|llm-jp/llm-jp-1.3b-v1.0|1.3B|⚪︎|

**rinna models**

|Model name|Model size|Quantized model|
|---|---|---|
|rinna/japanese-gpt2-xsmall|0.37B|⚪︎|
|rinna/japanese-gpt-neox-3.6b-instruction-ppo|3.6B|⚪︎|
|rinna/japanese-gpt-neox-3.6b|3.6B|⚪︎|

**cyberagent models**

|Model name|Model size|Quantized model|
|---|---|---|
|cyberagent/open-calm-small|0.160B|⚪︎|
|cyberagent/open-calm-medium|0.4B|⚪︎|
|cyberagent/open-calm-large|0.83B|⚪︎|
|cyberagent/open-calm-1b|1B|⚪︎|
|cyberagent/open-calm-3b|3B|⚪︎|

**LINE models**

|Model name|Model size|Quantized model|
|---|---|---|
|line-corporation/japanese-large-lm-1.7b|1.7B|⚪︎|
|line-corporation/japanese-large-lm-1.7b-instruction-sft|1.7B|⚪︎|
|line-corporation/japanese-large-lm-3.6b|3.6B|⚪︎|
|line-corporation/japanese-large-lm-3.6b-instruction-sft|3.6B|⚪︎|
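As a sketch of how one of these ONNX models could be run with `onnxruntime` (the tokenizer id is taken from the table above; the `.onnx` file path inside this repo is an assumption — adjust it to the file layout of the model you downloaded):

```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt2-xsmall")
session = ort.InferenceSession("japanese-gpt2-xsmall/model.onnx")  # assumed path

inputs = tokenizer("こんにちは", return_tensors="np")
# Build the feed dict from the graph's declared inputs, so only the
# tensors this particular export expects are passed in.
feed = {i.name: inputs[i.name] for i in session.get_inputs() if i.name in inputs}
logits = session.run(None, feed)[0]
```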
intervitens/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-3.5bpw-h6-exl2-rpcal
intervitens
2024-01-10T14:13:51Z
6
1
transformers
[ "transformers", "mixtral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T14:07:16Z
--- base_model: [] tags: - mergekit - merge --- Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset. Requires ExllamaV2 version 0.0.11 and up. Original model link: [Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES) Original model README below. *** # Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./extra_hdd/Mixtral-8x7B-v0.1 as a base. ### Models Merged The following models were included in the merge: * ./extra_hdd2/Mixtral-8x7B-Instruct-v0.1 * ./extra_hdd/Mixtral-8x7B-v0.1-LimaRP-ZLoss ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ./extra_hdd2/Mixtral-8x7B-Instruct-v0.1 parameters: density: 0.5 weight: 1.0 - model: ./extra_hdd/Mixtral-8x7B-v0.1-LimaRP-ZLoss parameters: density: 0.5 weight: 0.5 merge_method: dare_ties base_model: ./extra_hdd/Mixtral-8x7B-v0.1 parameters: #normalize: false #int8_mask: true dtype: bfloat16 ```
cyruslau/meinapastel_v6Pastel
cyruslau
2024-01-10T14:13:46Z
0
0
null
[ "region:us" ]
null
2024-01-10T13:48:45Z
Cloned from https://civitai.com/models/11866/meinapastel
gayane/BARTSmiles
gayane
2024-01-10T14:05:02Z
212
2
transformers
[ "transformers", "pytorch", "bart", "feature-extraction", "arxiv:2211.16349", "endpoints_compatible", "region:us" ]
feature-extraction
2024-01-04T06:54:55Z
This repository is adapted from: https://huggingface.co/chenxran/bart-smiles/tree/main

```python
from transformers import AutoTokenizer, AutoModel, pipeline

smiles = "CCC(=O)"

# Tokenize a SMILES string
tokenizer = AutoTokenizer.from_pretrained("./BARTSmiles/", add_prefix_space=True)
inputs = tokenizer(smiles, return_tensors="pt", return_token_type_ids=False, add_special_tokens=True)

model = AutoModel.from_pretrained('./BARTSmiles')
model.eval()

# Use a pipeline as a high-level helper to extract features
extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)

result = extractor(smiles, return_tensors=True, tokenize_kwargs={'return_token_type_ids': False})
```

## Citation

```bibtex
@article{chilingaryan2022bartsmiles,
  title={Bartsmiles: Generative masked language models for molecular representations},
  author={Chilingaryan, Gayane and Tamoyan, Hovhannes and Tevosyan, Ani and Babayan, Nelly and Khondkaryan, Lusine and Hambardzumyan, Karen and Navoyan, Zaven and Khachatrian, Hrant and Aghajanyan, Armen},
  journal={arXiv preprint arXiv:2211.16349},
  year={2022}
}
```
miguelvela/Taxi-unit2
miguelvela
2024-01-10T13:53:18Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-10T13:22:59Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-unit2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="miguelvela/Taxi-unit2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
rolo9/roberta-base-bne-finetuned-squad-es
rolo9
2024-01-10T13:49:39Z
20
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "dataset:squad_es", "base_model:PlanTL-GOB-ES/roberta-base-bne", "base_model:finetune:PlanTL-GOB-ES/roberta-base-bne", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-02T15:23:03Z
--- license: apache-2.0 base_model: PlanTL-GOB-ES/roberta-base-bne tags: - generated_from_trainer datasets: - squad_es model-index: - name: roberta-base-bne-finetuned-squad-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-squad-es This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the squad_es dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
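To try the model quickly, a sketch using the `question-answering` pipeline:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="rolo9/roberta-base-bne-finetuned-squad-es")
print(qa(
    question="¿Dónde vive Ana?",
    context="Me llamo Ana y vivo en Madrid desde 2010.",
))
```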
ByunByun/qlora-koalpaca-polyglot-12.8b-150step_positive_blog_keywords_fix
ByunByun
2024-01-10T13:37:42Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2024-01-10T13:37:33Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
moulichand/Personalised
moulichand
2024-01-10T13:31:58Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "region:us" ]
text-to-image
2024-01-10T12:59:47Z
--- base_model: runwayml/stable-diffusion-v1-5 instance_prompt: "Photo of Moulichandbm, a young man with black hair, wearing an orange T-shirt" tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was trained.
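A sketch of running inference with `diffusers` (this assumes the repo contains a full Stable Diffusion pipeline, as AutoTrain DreamBooth usually pushes):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "moulichand/Personalised", torch_dtype=torch.float16
).to("cuda")

prompt = "Photo of Moulichandbm, a young man with black hair, wearing an orange T-shirt"
image = pipe(prompt).images[0]
image.save("moulichandbm.png")
```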
FungSung/distilBert_sentiment_test_2023DEC
FungSung
2024-01-10T13:27:34Z
95
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:lxyuan/distilbert-base-multilingual-cased-sentiments-student", "base_model:finetune:lxyuan/distilbert-base-multilingual-cased-sentiments-student", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-10T13:12:22Z
--- license: apache-2.0 base_model: lxyuan/distilbert-base-multilingual-cased-sentiments-student tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilBert_sentiment_test_2023DEC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBert_sentiment_test_2023DEC This model is a fine-tuned version of [lxyuan/distilbert-base-multilingual-cased-sentiments-student](https://huggingface.co/lxyuan/distilbert-base-multilingual-cased-sentiments-student) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8211 - Y True: [1 2 0 ... 0 2 2] - Y Pred: [1 2 0 ... 0 2 2] - Accuracy: 0.7725 - F1: 0.7729 - Precision: 0.7741 - Recall: 0.7725 - Confusion Matrix: [[ 792 191 18] [ 234 1053 143] [ 38 180 885]] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
uttam333/layoutlmv3-custom_no_text
uttam333
2024-01-10T13:26:50Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-10T12:56:26Z
--- license: mit base_model: microsoft/layoutlm-base-uncased tags: - generated_from_trainer model-index: - name: layoutlmv3-custom_no_text results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-custom_no_text This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2406 - eval_noise: {'precision': 0.772093023255814, 'recall': 0.8019323671497585, 'f1': 0.7867298578199052, 'number': 621} - eval_signal: {'precision': 0.7472868217054264, 'recall': 0.77491961414791, 'f1': 0.7608524072612471, 'number': 622} - eval_overall_precision: 0.7597 - eval_overall_recall: 0.7884 - eval_overall_f1: 0.7738 - eval_overall_accuracy: 0.9518 - eval_runtime: 1.0449 - eval_samples_per_second: 34.452 - eval_steps_per_second: 4.785 - epoch: 19.0 - step: 342 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
Aedelon/ppo-LunarLander-v2
Aedelon
2024-01-10T13:25:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-08T19:22:27Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 261.04 +/- 17.28
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal sketch (the checkpoint filename follows the usual convention and is an assumption here):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo; the filename is an assumption.
checkpoint = load_from_hub("Aedelon/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
ByunByun/qlora-koalpaca-polyglot-12.8b-150step_positive_blog_keywords
ByunByun
2024-01-10T13:15:33Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2024-01-10T13:15:26Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
CarlosFMarques/sdxl-kream-model-lora-spotify
CarlosFMarques
2024-01-10T13:14:56Z
1
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-01-09T15:24:24Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-xl-base-1.0 dataset: CarlosFMarques/spotify tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - CarlosFMarques/sdxl-kream-model-lora-spotify These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the CarlosFMarques/spotify dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
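## Usage (sketch)

A minimal loading sketch with 🧨 diffusers; the prompt is illustrative, and loading the fp16-fix VAE mirrors the training setup noted above:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Same special VAE as used for training (see above).
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA adaptation weights from this repo.
pipe.load_lora_weights("CarlosFMarques/sdxl-kream-model-lora-spotify")

image = pipe("an album cover", num_inference_steps=30).images[0]  # prompt is illustrative
image.save("example.png")
```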
LarryAIDraw/jeanne-000008
LarryAIDraw
2024-01-10T13:04:16Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T13:02:13Z
--- license: creativeml-openrail-m --- https://civitai.com/models/258851/jeanne-darc-formaldress
avivnat13/distilbert-base-uncased-finetuned-intent
avivnat13
2024-01-10T12:58:23Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-08T16:53:46Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-intent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-intent This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0049 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2983 | 1.0 | 178 | 0.0111 | 0.9993 | 0.9993 | | 0.0085 | 2.0 | 356 | 0.0049 | 1.0 | 1.0 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cpu - Datasets 2.16.1 - Tokenizers 0.15.0
LarryAIDraw/raiden_origin-10
LarryAIDraw
2024-01-10T12:55:19Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T12:41:34Z
--- license: creativeml-openrail-m --- https://civitai.com/models/258299?modelVersionId=291251
LarryAIDraw/Stelle-10_IAWolf
LarryAIDraw
2024-01-10T12:54:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T12:40:15Z
--- license: creativeml-openrail-m --- https://civitai.com/models/259625/stelle-honkai-star-rail
FungSung/distilBert_sentiment_test_23DEC
FungSung
2024-01-10T12:52:41Z
96
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:lxyuan/distilbert-base-multilingual-cased-sentiments-student", "base_model:finetune:lxyuan/distilbert-base-multilingual-cased-sentiments-student", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-08T10:55:02Z
--- license: apache-2.0 base_model: lxyuan/distilbert-base-multilingual-cased-sentiments-student tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilBert_sentiment_test_23DEC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBert_sentiment_test_23DEC This model is a fine-tuned version of [lxyuan/distilbert-base-multilingual-cased-sentiments-student](https://huggingface.co/lxyuan/distilbert-base-multilingual-cased-sentiments-student) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4967 - Y True: [1 2 0 ... 0 2 2] - Y Pred: [1 2 0 ... 0 2 2] - Accuracy: 0.7583 - F1: 0.7588 - Precision: 0.7601 - Recall: 0.7583 - Confusion Matrix: [[0 0 0] [0 0 0] [0 0 0]] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
LarryAIDraw/main_lora_yurucamp_inuyama_aoi_v3_pagedAdamW8bit_d64a32_NP5-000010
LarryAIDraw
2024-01-10T12:51:29Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T12:35:05Z
--- license: creativeml-openrail-m --- https://civitai.com/models/7033/yurucampinuyamaaoi
LarryAIDraw/BTW_NoelNiihashi-DEF
LarryAIDraw
2024-01-10T12:50:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T12:34:42Z
--- license: creativeml-openrail-m --- https://civitai.com/models/260008/noel-niihashi-burn-the-witch
Mdebart/mistral-7b-qlora-mental-health
Mdebart
2024-01-10T12:43:23Z
0
0
peft
[ "peft", "safetensors", "region:us" ]
null
2024-01-10T12:40:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
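## How to load (sketch)

A sketch of reloading the adapter with the quantization config listed above. The base model id is an assumption inferred from the repo name, since the card does not state it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes config documented above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "mistralai/Mistral-7B-v0.1"  # assumption: base model inferred from the repo name
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "Mdebart/mistral-7b-qlora-mental-health")
```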
xiawei910/ppo-Pyramids
xiawei910
2024-01-10T12:42:40Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-01-10T12:42:37Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: xiawei910/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Akhi1esh/chat-support-bot-faq
Akhi1esh
2024-01-10T12:34:19Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:tiiuae/falcon-7b", "base_model:adapter:tiiuae/falcon-7b", "region:us" ]
null
2024-01-10T12:34:13Z
--- library_name: peft base_model: tiiuae/falcon-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
damerajee/tinyllama-sft-small-v2
damerajee
2024-01-10T12:18:12Z
86
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Instruct", "TinyLlama", "en", "dataset:yahma/alpaca-cleaned", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:finetune:unsloth/tinyllama-bnb-4bit", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T10:09:54Z
--- library_name: transformers base_model: unsloth/tinyllama-bnb-4bit license: mit datasets: - yahma/alpaca-cleaned language: - en pipeline_tag: text-generation tags: - Instruct - TinyLlama --- # Steps to try the model ### Prompt template

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```

### Load the model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("damerajee/tinyllama-sft-small-v2")
model = AutoModelForCausalLM.from_pretrained("damerajee/tinyllama-sft-small-v2")
```

### Inference

```python
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "best places to visit in india",  # instruction
            "",  # input
            "",  # output
        )
    ] * 1,
    return_tensors="pt",
)

outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
tokenizer.batch_decode(outputs)
```

# Model Information

The base model [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) was instruction fine-tuned using [Unsloth](https://github.com/unslothai/unsloth).

# Model Limitations

The model was trained on a very small dataset, so quality may be limited; training on a larger dataset is planned.

# Training Details

The model was trained for 1 epoch on a free Google Colab instance, which took approximately 1 hour and 30 minutes.
aydippy/dippy
aydippy
2024-01-10T12:17:29Z
54
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T06:17:35Z
--- license: apache-2.0 base_model: distilbert-base-uncased-finetuned-sst-2-english tags: - generated_from_keras_callback model-index: - name: dippy results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dippy This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0530 - Train Accuracy: 0.9818 - Validation Loss: 0.3083 - Validation Accuracy: 0.8876 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 16838, 'end_learning_rate': 0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1150 | 0.9609 | 0.3167 | 0.8888 | 0 | | 0.0530 | 0.9818 | 0.3083 | 0.8876 | 1 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Tokenizers 0.15.0
mtc/mistralai-Mistral-7B-v0.1-arxiv-summarization-5000-no_quantization-lora-full
mtc
2024-01-10T12:10:30Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-01-10T12:09:34Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
adihere/zephyr-support-chatbot
adihere
2024-01-10T12:09:35Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-alpha-GPTQ", "base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ", "license:mit", "region:us" ]
null
2024-01-10T11:32:03Z
--- license: mit base_model: TheBloke/zephyr-7B-alpha-GPTQ tags: - trl - sft - generated_from_trainer model-index: - name: zephyr-support-chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-support-chatbot This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
Aedelon/q-Taxi-v3-2
Aedelon
2024-01-10T12:04:09Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-10T12:04:07Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gymnasium as gym  # assumption: the environment comes from Gymnasium (use `import gym` on older setups)

# `load_from_hub` is assumed to be the pickle-loading helper from the Hugging Face Deep RL Course notebook.
model = load_from_hub(repo_id="Aedelon/q-Taxi-v3-2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
modamsko/lora-sdxl-flatillustration
modamsko
2024-01-10T12:03:24Z
3
2
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-04T12:37:04Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a man in szn style' output: url: "image_0.png" - text: 'a man in szn style' output: url: "image_1.png" - text: 'a man in szn style' output: url: "image_2.png" - text: 'a man in szn style' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a cat in szn style license: openrail++ --- # SDXL LoRA DreamBooth - modamsko/lora-sdxl-flatillustration <Gallery /> ## Model description These are modamsko/lora-sdxl-flatillustration LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use `a cat in szn style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](modamsko/lora-sdxl-flatillustration/tree/main) them in the Files & versions tab.
xiawei910/ppo-SnowballTarget
xiawei910
2024-01-10T12:00:56Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-01-10T12:00:52Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: xiawei910/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
RKessler/EVALutionRelationTrain-3
RKessler
2024-01-10T12:00:04Z
1
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-10T00:00:34Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: EVALutionRelationTrain-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EVALutionRelationTrain-3 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6950 - Accuracy: 0.5 - Precision: 0.5 - Recall: 1.0 - F1: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 0.35 | 100 | 0.7296 | 0.5 | 0.0 | 0.0 | 0.0 | | No log | 0.71 | 200 | 0.7154 | 0.5 | 0.5 | 1.0 | 0.6667 | | No log | 1.06 | 300 | 0.6932 | 0.5 | 0.0 | 0.0 | 0.0 | | No log | 1.42 | 400 | 0.6934 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.7016 | 1.77 | 500 | 0.6941 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.7016 | 2.13 | 600 | 0.6979 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.7016 | 2.48 | 700 | 0.6950 | 0.5 | 0.5 | 1.0 | 0.6667 | | 0.7016 | 2.84 | 800 | 0.6961 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.7016 | 3.19 | 900 | 0.6939 | 0.5 | 0.5 | 1.0 | 0.6667 | | 0.6977 | 3.55 | 1000 | 0.6960 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.6977 | 3.9 | 1100 | 0.6974 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.6977 | 4.26 | 1200 | 0.6939 | 0.5 | 0.0 | 0.0 | 0.0 | | 0.6977 | 4.61 | 1300 | 0.6950 | 0.5 | 0.5 | 1.0 | 0.6667 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.5 - Tokenizers 0.14.1
kardosdrur/dfm-sentence-encoder-finetune-large-v1
kardosdrur
2024-01-10T11:52:24Z
27
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-08T13:16:59Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # kardosdrur/dfm-sentence-encoder-finetune-large-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('kardosdrur/dfm-sentence-encoder-finetune-large-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('kardosdrur/dfm-sentence-encoder-finetune-large-v1') model = AutoModel.from_pretrained('kardosdrur/dfm-sentence-encoder-finetune-large-v1') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kardosdrur/dfm-sentence-encoder-finetune-large-v1) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 236754 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 40, "evaluator": "dfm_sentence_trf.evaluation.task_evaluator.TaskListEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 5000, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Kooten/Noromaid-13b-v0.3-QUIP-2bit
Kooten
2024-01-10T11:50:26Z
63
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T07:45:31Z
--- license: cc-by-nc-4.0 --- # Noromaid-13b-v0.3 2bit QUIP# ## Description QUIP# quant of [NeverSleep/Noromaid-13b-v0.3](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3). I wanted to try quantizing with QUIP#; it took about 20 hours, but it works: the result is coherent and really tiny. ## Other quants: EXL2: [8bpw](https://huggingface.co/Kooten/Noromaid-13b-v0.3-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/Noromaid-13b-v0.3-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/Noromaid-13b-v0.3-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/Noromaid-13b-v0.3-4bpw-exl2) ## Prompt Format ### Silly Tavern Preset [Instruct template](https://files.catbox.moe/nzygwn.json) [Context template](https://files.catbox.moe/l5hguf.json) ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Input: {input} ### Response: ``` ## Contact Kooten on Discord
cmp-nct/ShareGPT4V-7B-quant-gguf
cmp-nct
2024-01-10T11:49:33Z
28
4
null
[ "gguf", "license:llama2", "region:us" ]
null
2023-12-06T13:40:51Z
--- license: llama2 --- The original model is here: https://huggingface.co/Lin-Chen/ShareGPT4V-7B This is the K-type quantized variant, for inference with llama.cpp's `llava-cli`. In my tests this is currently the best LLaVA-based vision model.
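A hedged invocation sketch for llama.cpp's `llava-cli`; the `.gguf` filenames below are placeholders, so substitute the actual quantized model and mmproj files shipped in this repo:

```shell
# Filenames are illustrative; use the quantized model and mmproj files from this repo.
./llava-cli -m ShareGPT4V-7B-Q5_K.gguf \
    --mmproj mmproj-model-f16.gguf \
    --image photo.jpg \
    -p "Describe the image in detail."
```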
Aedelon/Taxi-v3-1
Aedelon
2024-01-10T11:45:01Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-10T11:44:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gymnasium as gym  # assumption: the environment comes from Gymnasium (use `import gym` on older setups)

# `load_from_hub` is assumed to be the pickle-loading helper from the Hugging Face Deep RL Course notebook.
model = load_from_hub(repo_id="Aedelon/Taxi-v3-1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
changwan/solar-finetuned-dialogstudio
changwan
2024-01-10T11:34:48Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:dialogstudio", "base_model:LDCC/LDCC-SOLAR-10.7B", "base_model:adapter:LDCC/LDCC-SOLAR-10.7B", "license:cc-by-nc-4.0", "region:us" ]
null
2024-01-10T09:11:23Z
--- license: cc-by-nc-4.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - dialogstudio base_model: LDCC/LDCC-SOLAR-10.7B model-index: - name: solar-finetuned-dialogstudio results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # solar-finetuned-dialogstudio This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on the dialogstudio dataset. It achieves the following results on the evaluation set: - Loss: 2.4770 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
jppech/corgy_moi_LoRA
jppech
2024-01-10T11:32:12Z
9
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-10T11:32:10Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of MOI humain license: openrail++ --- # SDXL LoRA DreamBooth - jppech/corgy_moi_LoRA <Gallery /> ## Model description These are jppech/corgy_moi_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of MOI humain` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](jppech/corgy_moi_LoRA/tree/main) them in the Files & versions tab.
thrunlab/Mistral-7B-v0.1_cola_relu
thrunlab
2024-01-10T11:29:28Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-05T09:49:40Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer metrics: - accuracy model-index: - name: Mistral-7B-v0.1_cola_relu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1_cola_relu This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3969 - Accuracy: 0.8528 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 2 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 750 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.3308 | 0.33 | 10 | 3.2312 | 0.6721 | | 1.9948 | 0.66 | 20 | 1.9259 | 0.5628 | | 1.755 | 0.98 | 30 | 1.6666 | 0.6529 | | 1.2472 | 1.31 | 40 | 1.3599 | 0.6280 | | 0.7 | 1.64 | 50 | 1.0398 | 0.6903 | | 1.0118 | 1.97 | 60 | 0.8845 | 0.6798 | | 0.7947 | 2.3 | 70 | 0.7958 | 0.7200 | | 0.8203 | 2.62 | 80 | 0.7160 | 0.7191 | | 0.8548 | 2.95 | 90 | 0.6607 | 0.7296 | | 0.5277 | 3.28 | 100 | 0.6292 | 0.7430 | | 0.7134 | 3.61 | 110 | 0.6562 | 0.7440 | | 0.7233 | 3.93 | 120 | 0.6248 | 0.7488 | | 0.5547 | 4.26 | 130 | 0.5399 | 0.7488 | | 0.5171 | 4.59 | 140 | 0.5230 | 0.7536 | | 0.492 | 4.92 | 150 | 0.5184 | 0.7632 | | 0.5003 | 5.25 | 160 | 0.4999 | 0.7728 | | 0.4884 | 5.57 | 170 | 0.4827 | 0.7814 | | 0.514 | 5.9 | 180 | 0.5048 | 0.7910 | | 0.3669 | 6.23 | 190 | 0.4783 | 0.7977 | | 0.4786 | 6.56 | 200 | 0.4533 | 0.7948 | | 0.4244 | 6.89 | 210 | 0.4379 | 0.8035 | | 0.3235 | 7.21 | 220 | 0.4439 | 0.8073 | | 0.4307 | 7.54 | 230 | 0.4258 | 0.8236 | | 0.404 | 7.87 | 240 | 0.4184 | 0.8188 | | 0.3772 | 8.2 | 250 | 0.4089 | 0.8207 | | 0.3937 | 8.52 | 260 | 0.4595 | 0.8092 | | 0.3896 | 8.85 | 270 | 0.4148 | 0.8265 | | 0.3296 | 9.18 | 280 | 0.4130 | 0.8236 | | 0.328 | 9.51 | 290 | 0.3944 | 0.8389 | | 0.3383 | 9.84 | 300 | 0.3862 | 0.8322 | | 0.3146 | 10.16 | 310 | 0.3847 | 0.8418 | | 0.3069 | 10.49 | 320 | 0.4192 | 0.8245 | | 0.2732 | 10.82 | 330 | 0.4190 | 0.8313 | | 0.2819 | 11.15 | 340 | 0.4427 | 0.8188 | | 0.3738 | 11.48 | 350 | 0.3807 | 0.8408 | | 0.3004 | 11.8 | 360 | 0.3722 | 0.8437 | | 0.2894 | 12.13 | 370 | 0.3922 | 0.8341 | | 0.2747 | 12.46 | 380 | 0.3782 | 0.8370 | | 0.2812 | 12.79 | 390 | 0.3667 | 0.8514 | | 0.2369 | 13.11 | 400 | 0.3884 | 0.8408 | | 0.2931 | 13.44 | 410 | 0.3807 | 0.8456 | | 0.2702 | 13.77 | 420 | 0.3742 | 0.8399 | | 0.2821 | 14.1 | 430 | 0.3737 | 0.8485 | | 0.2358 | 14.43 | 440 | 0.3739 | 0.8456 | | 0.2326 | 14.75 | 450 | 0.3699 | 0.8514 | | 0.2475 | 15.08 | 460 | 0.3771 | 0.8466 | | 0.2402 | 15.41 | 470 | 0.4064 | 0.8351 | | 0.2435 | 15.74 | 480 | 0.3758 | 0.8456 | | 0.1896 | 16.07 | 490 | 0.3779 | 0.8456 | | 0.2228 | 16.39 | 500 | 0.3868 | 0.8456 | | 0.2149 | 16.72 | 510 | 0.3800 | 0.8485 | | 0.1781 | 17.05 | 520 | 0.3841 | 0.8514 | | 0.1729 | 
17.38 | 530 | 0.4000 | 0.8476 | | 0.1897 | 17.7 | 540 | 0.3866 | 0.8456 | | 0.1537 | 18.03 | 550 | 0.4317 | 0.8370 | | 0.1478 | 18.36 | 560 | 0.4197 | 0.8466 | | 0.1686 | 18.69 | 570 | 0.4325 | 0.8418 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
Shaimae22/bertemotion
Shaimae22
2024-01-10T11:15:35Z
93
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-06T16:48:15Z
classes = ["Neutral", "Happy", "Sad", "Love", "Anger"]
metamath/kcbert-large-finetuned-nsmc
metamath
2024-01-10T11:09:44Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:beomi/kcbert-large", "base_model:finetune:beomi/kcbert-large", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-10T09:51:24Z
--- base_model: beomi/kcbert-large tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: kcbert-large-finetuned-nsmc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kcbert-large-finetuned-nsmc This model is a fine-tuned version of [beomi/kcbert-large](https://huggingface.co/beomi/kcbert-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2576 - Accuracy: 0.9137 - F1: 0.9137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.267 | 1.0 | 3750 | 0.2231 | 0.9106 | 0.9106 | | 0.1427 | 2.0 | 7500 | 0.2576 | 0.9137 | 0.9137 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
Omeer/flan-t5-base-samsum
Omeer
2024-01-10T11:09:32Z
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-10T11:08:41Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-samsum This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
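## Usage (sketch)

A minimal inference sketch; the SAMSum-style dialogue input is an assumption based on the model name, since the card lists the training dataset as unknown:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Omeer/flan-t5-base-samsum")

# Illustrative SAMSum-style dialogue (assumption: the model was tuned for dialogue summarization).
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```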
TheBloke/openchat-3.5-0106-AWQ
TheBloke
2024-01-10T11:07:19Z
84
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "openchat", "C-RLFT", "conversational", "arxiv:2309.11235", "arxiv:2303.08774", "base_model:openchat/openchat-3.5-0106", "base_model:quantized:openchat/openchat-3.5-0106", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-01-10T10:49:16Z
--- base_model: openchat/openchat-3.5-0106 inference: false library_name: transformers license: apache-2.0 model_creator: OpenChat model_name: Openchat 3.5 0106 model_type: mistral pipeline_tag: text-generation prompt_template: 'GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: ' quantized_by: TheBloke tags: - openchat - mistral - C-RLFT --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Openchat 3.5 0106 - AWQ - Model creator: [OpenChat](https://huggingface.co/openchat) - Original model: [Openchat 3.5 0106](https://huggingface.co/openchat/openchat-3.5-0106) <!-- description start --> ## Description This repo contains AWQ model files for [OpenChat's Openchat 3.5 0106](https://huggingface.co/openchat/openchat-3.5-0106). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. 
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openchat-3.5-0106-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openchat-3.5-0106-GGUF) * [OpenChat's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openchat/openchat-3.5-0106) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenChat-Correct ``` GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/openchat-3.5-0106-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/openchat-3.5-0106-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `openchat-3.5-0106-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/openchat-3.5-0106-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template = '''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: '''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/openchat-3.5-0106-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/openchat-3.5-0106-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: '''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/openchat-3.5-0106-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: '''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: OpenChat's Openchat 3.5 0106 <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> <h1>Advancing Open-source Language Models with Mixed-Quality Data</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://openchat.team"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/imoneoi/openchat"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="https://arxiv.org/pdf/2309.11235.pdf"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/pQjnXvNKHY"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> <p align="center" style="margin-top: 0px;"> <span class="link-text" style=" margin-right: 0px; font-size: 0.8em">Sponsored by RunPod</span> <img 
src="https://styles.redditmedia.com/t5_6075m3/styles/profileIcon_71syco7c5lt81.png?width=256&height=256&frame=1&auto=webp&crop=256:256,smart&s=24bd3c71dc11edc5d4f88d0cbc1da72ed7ae1969" alt="RunPod Logo" style="width:30px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> </p> <div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center; ont-size: 0.5em; border: 0.8em solid #864AF9;"> <a href="https://huggingface.co/openchat/openchat-3.5-0106" style="text-decoration: none; color: black;"> <span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.5</span> <span style="font-size: 1.0em; font-family: 'Helvetica'; color: white; background-color: #864AF9; vertical-align: top; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">0106</span> <span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;"> <br> 🏆 The Overall Best Performing Open Source 7B Model 🏆 <br> 🤖 Outperforms <span style="font-weight: bold;">ChatGPT</span> (March) and <span style="font-weight: bold;">Grok-1</span> 🤖 <br> 🚀<span style="font-size: 1em; font-family: 'Helvetica'; color: black; font-weight: bold;">15</span>-point improvement in Coding over <span style="font-size: 0.9em; font-family: 'Helvetica'; color: black; font-weight: bold;">OpenChat-3.5🚀</span> <br><br><span style="font-size: 1em; font-family: 'Helvetica'; color: #3c72db; font-weight: bold;">New Features</span> <br> 💡 2 Modes: Coding + Generalist, Mathematical Reasoning 💡 <br> 🧑‍⚖️ Experimental support for Evaluator and Feedback capabilities 🧑‍⚖️ </span> </a> </div> <div style="display: flex; justify-content: center; align-items: center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat-bench-0106.png" style="width: 100%; border-radius: 1em"> </div> <div> <h3> Table of Contents</h3> </div> 1. [Usage](#usage) 2. [Benchmarks](#benchmarks) 3. [Limitations](#limitations) 4. [License](#license) 6. [Citation](#citation) 7. [Acknowledgements](#acknowledgements) <div align="center"> <h2> Usage </h2> </div> To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. 
For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. | Model | Size | Context | Weights | Serving | |-------------------|------|---------|------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------| | OpenChat-3.5-0106 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat-3.5-0106) | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.5-0106 --engine-use-ray --worker-use-ray` | <details> <summary>Example request (click to expand)</summary> 💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}] }' ``` 🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "condition": "Math Correct", "messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}] }' ``` </details> ### Conversation templates 💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks ``` GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant: ``` 🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems ``` Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>Math Correct Assistant: ``` ⚠️ **Notice:** Remember to set `<|end_of_turn|>` as end of generation token. The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template: ```python messages = [ {"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}, {"role": "user", "content": "How are you today?"} ] tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True) assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] ``` <div align="center"> <h2> (Experimental) Evaluator / Feedback Capabilities </h2> </div> We've included evaluator capabilities in this release to advance open-source models as evaluators. You can use `Default Mode (GPT4 Correct)` with the following prompt (same as [Prometheus](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)) to evaluate a response. ``` ###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. 
###The instruction to evaluate: {orig_instruction} ###Response to evaluate: {orig_response} ###Reference Answer (Score 5): {orig_reference_answer} ###Score Rubrics: [{orig_criteria}] Score 1: {orig_score1_description} Score 2: {orig_score2_description} Score 3: {orig_score3_description} Score 4: {orig_score4_description} Score 5: {orig_score5_description} ###Feedback: ``` <div align="center"> <h2> Benchmarks </h2> </div> | Model | # Params | Average | MT-Bench | HumanEval | BBH MC | AGIEval | TruthfulQA | MMLU | GSM8K | BBH CoT | |-----------------------|----------|----------|----------|-----------|----------|----------|------------|----------|----------|----------| | **OpenChat-3.5-0106** | **7B** | **64.5** | 7.8 | **71.3** | **51.5** | **49.1** | 61.0 | 65.8 | **77.4** | 62.2 | | OpenChat-3.5-1210 | **7B** | 63.8 | 7.76 | 68.9 | 49.5 | 48.0 | **61.8** | 65.3 | 77.3 | 61.8 | | OpenChat-3.5 | **7B** | 61.6 | 7.81 | 55.5 | 47.6 | 47.4 | 59.1 | 64.3 | 77.3 | 63.5 | | ChatGPT (March)* | ???B | 61.5 | **7.94** | 48.1 | 47.6 | 47.1 | 57.7 | **67.3** | 74.9 | **70.1** | | | | | | | | | | | | | | OpenHermes 2.5 | 7B | 59.3 | 7.54 | 48.2 | 49.4 | 46.5 | 57.5 | 63.8 | 73.5 | 59.9 | | OpenOrca Mistral | 7B | 52.7 | 6.86 | 38.4 | 49.4 | 42.9 | 45.9 | 59.3 | 59.1 | 58.1 | | Zephyr-β^ | 7B | 34.6 | 7.34 | 22.0 | 40.6 | 39.0 | 40.8 | 39.8 | 5.1 | 16.0 | | Mistral | 7B | - | 6.84 | 30.5 | 39.0 | 38.0 | - | 60.1 | 52.2 | - | <details> <summary>Evaluation Details(click to expand)</summary> *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time. ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data. **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories. All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks). </details> <div> <h3>HumanEval+</h3> </div> | Model | Size | HumanEval+ pass@1 | |-----------------------------|--------|-------------------| | **OpenChat-3.5-0106** | **7B** | **65.9** | | ChatGPT (December 12, 2023) | ???B | 64.6 | | WizardCoder-Python-34B-V1.0 | 34B | 64.6 | | OpenChat 3.5 1210 | 7B | 63.4 | | OpenHermes 2.5 | 7B | 41.5 | <div> <h3>OpenChat-3.5 vs. Grok</h3> </div> 🔥 OpenChat-3.5-0106 (7B) now outperforms Grok-0 (33B) on **all 4 benchmarks** and Grok-1 (???B) on average and **3/4 benchmarks**. 
|                       | License     | # Param | Average  | MMLU   | HumanEval | MATH     | GSM8k    |
|-----------------------|-------------|---------|----------|--------|-----------|----------|----------|
| **OpenChat-3.5-0106** | Apache-2.0  | **7B**  | **61.0** | 65.8   | **71.3**  | **29.3** | **77.4** |
| OpenChat-3.5-1210     | Apache-2.0  | **7B**  | 60.1     | 65.3   | 68.9      | 28.9     | 77.3     |
| OpenChat-3.5          | Apache-2.0  | **7B**  | 56.4     | 64.3   | 55.5      | 28.6     | 77.3     |
| Grok-0                | Proprietary | 33B     | 44.5     | 65.7   | 39.7      | 15.7     | 56.8     |
| Grok-1                | Proprietary | ???B    | 55.8     | **73** | 63.2      | 23.9     | 62.9     |

*: Grok results are reported by [X.AI](https://x.ai/).

<div align="center">
<h2> Limitations </h2>
</div>

**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges

**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.

**Safety**
OpenChat may sometimes generate harmful content, hate speech, or biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.

<div align="center">
<h2> License </h2>
</div>

Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.

<div align="center">
<h2> Citation </h2>
</div>

```
@article{wang2023openchat,
  title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
  author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
  journal={arXiv preprint arXiv:2309.11235},
  year={2023}
}
```

<div align="center">
<h2> 💌 Main Contributors </h2>
</div>

* Wang Guan [imonenext@gmail.com], Cheng Sijie [csj23@mails.tsinghua.edu.cn], Alpay Ariyak [aariyak@wpi.edu]
* We look forward to hearing from you and collaborating on this exciting project!
orion-penner/setfit-all-MiniLM-L6-v2-sst2-32-shot-public
orion-penner
2024-01-10T10:59:45Z
52
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "region:us" ]
text-classification
2024-01-10T10:59:40Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: [] pipeline_tag: text-classification inference: true --- # SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) --> - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 256 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("orion-penner/setfit-all-MiniLM-L6-v2-sst2-32-shot-public") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
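### Training Sketch

The two-phase few-shot procedure described above (contrastive fine-tuning of the Sentence Transformer, then fitting the LogisticRegression head) can be reproduced with a sketch like the following. This is illustrative only: the base Sentence Transformer is an assumption inferred from the repository name, and the dataset is a toy stand-in.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy few-shot dataset; real use would sample a handful of examples per class.
train_ds = Dataset.from_dict({
    "text": ["I loved the movie!", "A dreadful, boring film."],
    "label": [1, 0],
})

# Assumed base model (inferred from "all-MiniLM-L6-v2" in the repo name).
model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)

# Phase 1: contrastive fine-tuning; phase 2: fit the classification head.
trainer.train()
```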
Luismatoslm/summarization_model_save
Luismatoslm
2024-01-10T10:54:57Z
48
0
transformers
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-09T09:56:12Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_keras_callback model-index: - name: Luismatoslm/summarization_model_save results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Luismatoslm/summarization_model_save This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.6371 - Validation Loss: 2.9379 - Train Rouge1: 0.133 - Train Rouge2: 0.0436 - Train Rougel: 0.112 - Train Rougelsum: 0.1118 - Train Gen Len: 19.0 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 3.6371 | 2.9379 | 0.133 | 0.0436 | 0.112 | 0.1118 | 19.0 | 0 | ### Framework versions - Transformers 4.36.2 - TensorFlow 2.11.0 - Datasets 2.16.1 - Tokenizers 0.15.0
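### Computing ROUGE (sketch)

The table above reports ROUGE scores. A minimal sketch of how such scores can be computed with the 🤗 `evaluate` library (the example texts are hypothetical, and `rouge_score` must also be installed):

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

# Returns rouge1/rouge2/rougeL/rougeLsum F-measures in [0, 1].
print(rouge.compute(predictions=predictions, references=references))
```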
intervitens/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-3.7bpw-h6-exl2-rpcal
intervitens
2024-01-10T10:48:52Z
8
1
transformers
[ "transformers", "mixtral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T10:42:10Z
--- base_model: [] tags: - mergekit - merge --- Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset. Requires ExllamaV2 version 0.0.11 and up. Original model link: [Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES) Original model README below. *** # Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./extra_hdd/Mixtral-8x7B-v0.1 as a base. ### Models Merged The following models were included in the merge: * ./extra_hdd2/Mixtral-8x7B-Instruct-v0.1 * ./extra_hdd/Mixtral-8x7B-v0.1-LimaRP-ZLoss ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ./extra_hdd2/Mixtral-8x7B-Instruct-v0.1 parameters: density: 0.5 weight: 1.0 - model: ./extra_hdd/Mixtral-8x7B-v0.1-LimaRP-ZLoss parameters: density: 0.5 weight: 0.5 merge_method: dare_ties base_model: ./extra_hdd/Mixtral-8x7B-v0.1 parameters: #normalize: false #int8_mask: true dtype: bfloat16 ```
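A minimal loading sketch with the ExLlamaV2 Python API follows; the class and method names are taken from the library's example scripts around version 0.0.11 and should be treated as assumptions, not as instructions from the model author:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
# Assumed local download directory for this repository.
config.model_dir = "Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-3.7bpw-h6-exl2-rpcal"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
print(generator.generate_simple("Tell me a story:", settings, num_tokens=128))
```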
superlazycoder/distilbert-base-uncased-finetuned-squad
superlazycoder
2024-01-10T10:44:51Z
25
1
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-09T23:15:12Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1947 | 1.0 | 5533 | 1.1466 | | 0.938 | 2.0 | 11066 | 1.1017 | | 0.7378 | 3.0 | 16599 | 1.1578 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
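## Example usage (sketch)

A minimal extractive question-answering sketch; the question and context below are illustrative only:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="superlazycoder/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What task was the model fine-tuned for?",
    context=(
        "This DistilBERT checkpoint was fine-tuned for extractive "
        "question answering on SQuAD-style data."
    ),
)
print(result["answer"], result["score"])
```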
Deojaklah/Memeh
Deojaklah
2024-01-10T10:32:47Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T10:32:46Z
--- license: creativeml-openrail-m ---
devrishi/roberta-retrained
devrishi
2024-01-10T10:26:05Z
1
0
transformers
[ "transformers", "roberta", "token-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-10T08:48:14Z
--- license: mit language: - en ---
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_newPrompt_Seed101
behzadnet
2024-01-10T10:18:50Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-01-10T10:18:45Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.7.0.dev0
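## Loading the adapter (sketch)

A minimal loading sketch that mirrors the 4-bit NF4 `bitsandbytes` settings recorded above; generation settings are not documented in this card, so treat this only as an illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_newPrompt_Seed101"

# Mirrors the quantization config recorded in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```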
Deepakkori45/Aspect_term_1
Deepakkori45
2024-01-10T10:14:43Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-01-10T10:14:38Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
reach-vb/mistral-lora-mlx-test
reach-vb
2024-01-10T10:09:27Z
6
5
mlx
[ "mlx", "mistral", "pretrained", "text-generation", "en", "license:apache-2.0", "region:us" ]
text-generation
2024-01-10T09:52:46Z
---
language:
- en
license: apache-2.0
tags:
- pretrained
- mlx
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
---

# reach-vb/mistral-lora-mlx-test

This model was converted to MLX format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.

## Use with mlx

```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model reach-vb/mistral-lora-mlx-test --prompt "My name is"
```
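## Conversion (sketch)

The conversion itself was presumably done with the `convert.py` script in the same `mlx-examples/llms/hf_llm` folder; the flag names below are assumptions based on that script at the time of writing and are not confirmed by this card:

```bash
# Hypothetical conversion command (flag names assumed from mlx-examples).
python convert.py --hf-path mistralai/Mistral-7B-v0.1 --mlx-path mlx_model
```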
minhtien2405/t5-large-finetuned-text2code
minhtien2405
2024-01-10T09:53:49Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-10T04:03:08Z
--- license: apache-2.0 base_model: t5-large tags: - generated_from_trainer model-index: - name: t5-large-finetuned-text2code results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-large-finetuned-text2code This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
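## Example usage (sketch)

The card does not document the expected prompt format for text-to-code generation, so the plain-text prompt in this minimal sketch is an assumption:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "minhtien2405/t5-large-finetuned-text2code"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed plain-text prompt; the true training format is not documented.
inputs = tokenizer("print hello world to the console", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```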
haryoaw/scenario-TCR-data-glue-mrpc-model-bert-base-uncased
haryoaw
2024-01-10T09:49:33Z
98
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-29T20:47:33Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: scenario-TCR-data-glue-mrpc-model-bert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scenario-TCR-data-glue-mrpc-model-bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8418 - Accuracy: 0.8431 - F1: 0.8869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6969 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 115 | 0.3798 | 0.8578 | 0.8945 | | No log | 2.0 | 230 | 0.4539 | 0.8113 | 0.8752 | | No log | 3.0 | 345 | 0.4868 | 0.8309 | 0.8852 | | No log | 4.0 | 460 | 0.7806 | 0.8333 | 0.8745 | | 0.2296 | 5.0 | 575 | 0.6896 | 0.8627 | 0.9028 | | 0.2296 | 6.0 | 690 | 1.1197 | 0.8235 | 0.8788 | | 0.2296 | 7.0 | 805 | 0.9408 | 0.8456 | 0.8930 | | 0.2296 | 8.0 | 920 | 0.9377 | 0.8284 | 0.8805 | | 0.0295 | 9.0 | 1035 | 1.0580 | 0.8137 | 0.8770 | | 0.0295 | 10.0 | 1150 | 0.8418 | 0.8431 | 0.8869 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.16.0 - Tokenizers 0.15.0
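## Example usage (sketch)

MRPC is a sentence-pair (paraphrase) task, so inputs should be passed as `text`/`text_pair` pairs; the sentences below are illustrative only:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="haryoaw/scenario-TCR-data-glue-mrpc-model-bert-base-uncased",
)

# Sentence pair for paraphrase classification.
pair = {
    "text": "The company said profits rose in the third quarter.",
    "text_pair": "Profits increased in Q3, the company reported.",
}
print(classifier(pair))  # e.g. [{'label': ..., 'score': ...}]
```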
tarekxpc/xQAstral
tarekxpc
2024-01-10T09:44:54Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-01-10T07:11:55Z
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: xQAstral
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xQAstral

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500

### Training results

### Framework versions

- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
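## Loading the adapter (sketch)

A minimal sketch for loading this LoRA adapter on top of the GPTQ base model; it assumes `optimum` and `auto-gptq` are installed so that Transformers can load the quantized base, which this card does not state explicitly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"

# Loading a GPTQ checkpoint requires the optimum and auto-gptq packages.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "tarekxpc/xQAstral")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```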
TheBloke/Lumosia-MoE-4x10.7-GPTQ
TheBloke
2024-01-10T09:44:36Z
11
4
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "merge", "mergekit", "lazymergekit", "DopeorNope/SOLARC-M-10.7B", "maywell/PiVoT-10.7B-Mistral-v0.2-RP", "kyujinpy/Sakura-SOLAR-Instruct", "jeonsworld/CarbonVillain-en-10.7B-v1", "conversational", "base_model:SteelStorage/Lumosia-MoE-4x10.7", "base_model:quantized:SteelStorage/Lumosia-MoE-4x10.7", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-10T05:01:14Z
--- base_model: Steelskull/Lumosia-MoE-4x10.7 inference: false license: apache-2.0 model_creator: Steel model_name: Lumosia MoE 4X10.7 model_type: mixtral prompt_template: '### System: ### USER:{prompt} ### Assistant: ' quantized_by: TheBloke tags: - moe - merge - mergekit - lazymergekit - DopeorNope/SOLARC-M-10.7B - maywell/PiVoT-10.7B-Mistral-v0.2-RP - kyujinpy/Sakura-SOLAR-Instruct - jeonsworld/CarbonVillain-en-10.7B-v1 --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Lumosia MoE 4X10.7 - GPTQ - Model creator: [Steel](https://huggingface.co/Steelskull) - Original model: [Lumosia MoE 4X10.7](https://huggingface.co/Steelskull/Lumosia-MoE-4x10.7) <!-- description start --> # Description This repo contains GPTQ model files for [Steel's Lumosia MoE 4X10.7](https://huggingface.co/Steelskull/Lumosia-MoE-4x10.7). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF) * [Steel's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Steelskull/Lumosia-MoE-4x10.7) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Lumosia ``` ### System: ### USER:{prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 18.50 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 19.18 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 21.28 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 14.66 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. 
|
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 36.42 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 16.66 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 37.24 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/Lumosia-MoE-4x10.7-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Lumosia-MoE-4x10.7-GPTQ:gptq-4bit-128g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Lumosia-MoE-4x10.7-GPTQ`:

```shell
mkdir Lumosia-MoE-4x10.7-GPTQ
huggingface-cli download TheBloke/Lumosia-MoE-4x10.7-GPTQ --local-dir Lumosia-MoE-4x10.7-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Lumosia-MoE-4x10.7-GPTQ
huggingface-cli download TheBloke/Lumosia-MoE-4x10.7-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Lumosia-MoE-4x10.7-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Lumosia-MoE-4x10.7-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Lumosia-MoE-4x10.7-GPTQ --local-dir Lumosia-MoE-4x10.7-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Lumosia-MoE-4x10.7-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Lumosia-MoE-4x10.7-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Lumosia-MoE-4x10.7-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Lumosia-MoE-4x10.7-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''### System: ### USER:{prompt} ### Assistant: ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Lumosia-MoE-4x10.7-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''### System: ### USER:{prompt} ### Assistant: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. 
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Steel's Lumosia MoE 4X10.7

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/Qb88YeudOf7MYuGKTirXC.png)

# Lumosia-MoE-4x10.7

"Lumosia" was selected as it's a MoE of multiple SOLAR merges, so it really "lights the way"... it's 3am.

This is a very experimental model: a MoE of all the well-performing SOLAR models (based on personal experience, not the open leaderboard). Why? I just wanted to see what would happen.

Context is maybe 16k. Chat-instruct breaks the model at the moment, not really sure why, even though it will follow instructions.
Waiting on a fix for quants of FrankenMoE.

Template:
```
### System:

### USER:{prompt}

### Assistant:
```

Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```

Lumosia-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [DopeorNope/SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B)
* [maywell/PiVoT-10.7B-Mistral-v0.2-RP](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2-RP)
* [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)

## Evals:
* Pending

## Examples:
```
Example 1:
User:
Hello lumi, what is the meaning of life in around 100 words.

Lumosia:
The meaning of life can vary greatly depending on individual beliefs, values, and experiences. One common perspective suggests that the primary purpose of life is to seek happiness, fulfillment, and personal growth through relationships accomplishments, and spiritual awareness. Living with integrity. compassion, and gratitude while making a positive impact on others and the world around us can also contribute to finding meaning in life UItimately, the meaning of life is subjective and unique to each individual, who must define it for themselves through exploration reflection, and personal choice.
```
```
Example 2:
User:
Explain a MoE (Mixture of Experts) in around 100 words

Lumosia:
A MoE, short for Mixture of Experts, is a machine learning technique utilized in ensemble learning. It involves combining multiple specialist models (experts) to optimize predictive performance. Each expert focuses on a subset of the input features or data domains, making them specialized in particular areas. Through a weighting mechanism, like Gaussian Mixture Model, the predictions of experts are blended to produce a final result. MoE models demonstrate effectiveness in handling complex or ambiguous inputs where a single model might struggle. They are commonly used in natural language processing. computer vision, and speech synthesis.
```

## 🧩 Configuration

```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts: [""]
  - source_model: maywell/PiVoT-10.7B-Mistral-v0.2-RP
    positive_prompts: [""]
  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts: [""]
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v1
    positive_prompts: [""]
```

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Steelskull/Lumosia-MoE-4x10.7"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
SharonTudi/DIALOGUE
SharonTudi
2024-01-10T09:44:24Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-13T16:04:21Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: DIALOGUE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DIALOGUE This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0320 - Accuracy: 0.9902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2742 | 0.31 | 15 | 1.0896 | 0.7353 | | 0.9231 | 0.62 | 30 | 0.7436 | 0.8039 | | 0.7035 | 0.94 | 45 | 0.4280 | 0.9706 | | 0.4361 | 1.25 | 60 | 0.2308 | 1.0 | | 0.3081 | 1.56 | 75 | 0.1590 | 0.9902 | | 0.1794 | 1.88 | 90 | 0.1269 | 0.9706 | | 0.0988 | 2.19 | 105 | 0.0605 | 0.9902 | | 0.0606 | 2.5 | 120 | 0.0617 | 0.9902 | | 0.0372 | 2.81 | 135 | 0.0474 | 0.9902 | | 0.0209 | 3.12 | 150 | 0.0411 | 0.9902 | | 0.0134 | 3.44 | 165 | 0.0326 | 0.9902 | | 0.0099 | 3.75 | 180 | 0.0273 | 0.9902 | | 0.0085 | 4.06 | 195 | 0.0348 | 0.9902 | | 0.0065 | 4.38 | 210 | 0.0335 | 0.9902 | | 0.0058 | 4.69 | 225 | 0.0318 | 0.9902 | | 0.0047 | 5.0 | 240 | 0.0309 | 0.9902 | | 0.0041 | 5.31 | 255 | 0.0289 | 0.9902 | | 0.0041 | 5.62 | 270 | 0.0284 | 0.9902 | | 0.0038 | 5.94 | 285 | 0.0275 | 0.9902 | | 0.0036 | 6.25 | 300 | 0.0270 | 0.9902 | | 0.003 | 6.56 | 315 | 0.0260 | 0.9902 | | 0.0027 | 6.88 | 330 | 0.0270 | 0.9902 | | 0.0026 | 7.19 | 345 | 0.0291 | 0.9902 | | 0.0024 | 7.5 | 360 | 0.0298 | 0.9902 | | 0.0023 | 7.81 | 375 | 0.0283 | 0.9902 | | 0.0024 | 8.12 | 390 | 0.0255 | 0.9902 | | 0.0021 | 8.44 | 405 | 0.0251 | 0.9902 | | 0.0019 | 8.75 | 420 | 0.0260 | 0.9902 | | 0.0019 | 9.06 | 435 | 0.0264 | 0.9902 | | 0.0017 | 9.38 | 450 | 0.0266 | 0.9902 | | 0.0017 | 9.69 | 465 | 0.0276 | 0.9902 | | 0.0017 | 10.0 | 480 | 0.0271 | 0.9902 | | 0.0015 | 10.31 | 495 | 0.0271 | 0.9902 | | 0.0015 | 10.62 | 510 | 0.0271 | 0.9902 | | 0.0015 | 10.94 | 525 | 0.0266 | 0.9902 | | 0.0014 | 11.25 | 540 | 0.0261 | 0.9902 | | 0.0013 | 11.56 | 555 | 0.0260 | 0.9902 | | 0.0013 | 11.88 | 570 | 0.0264 | 0.9902 | | 0.0013 | 12.19 | 585 | 0.0268 | 0.9902 | | 0.0013 | 12.5 | 600 | 0.0270 | 0.9902 | | 0.0012 | 12.81 | 615 | 0.0274 | 0.9902 | | 0.0012 | 13.12 | 630 | 0.0274 | 0.9902 | | 0.0012 | 13.44 | 645 | 0.0278 | 0.9902 | | 0.0011 | 13.75 | 660 | 0.0278 | 0.9902 | | 0.001 | 14.06 | 675 | 0.0278 | 0.9902 | | 0.001 | 14.38 | 690 | 0.0283 | 0.9902 | | 0.001 | 14.69 | 705 | 0.0290 | 0.9902 | | 0.001 | 15.0 | 720 | 0.0286 | 0.9902 | | 0.001 | 15.31 | 735 | 0.0284 | 0.9902 | | 0.001 | 15.62 | 750 | 0.0285 | 0.9902 | | 0.0009 | 15.94 | 765 | 0.0289 | 0.9902 | | 0.0009 | 16.25 | 780 | 0.0298 | 0.9902 | | 0.0009 | 16.56 | 795 | 0.0305 | 0.9902 | | 0.0009 | 16.88 | 810 | 0.0309 | 0.9902 | | 0.0009 | 17.19 | 825 | 0.0304 | 0.9902 | | 0.0008 | 17.5 | 840 | 0.0303 | 0.9902 | | 0.0008 | 17.81 | 855 | 0.0302 | 0.9902 | | 
0.0009 | 18.12 | 870 | 0.0301 | 0.9902 | | 0.0008 | 18.44 | 885 | 0.0300 | 0.9902 | | 0.0008 | 18.75 | 900 | 0.0302 | 0.9902 | | 0.0008 | 19.06 | 915 | 0.0300 | 0.9902 | | 0.0007 | 19.38 | 930 | 0.0301 | 0.9902 | | 0.0007 | 19.69 | 945 | 0.0299 | 0.9902 | | 0.0008 | 20.0 | 960 | 0.0304 | 0.9902 | | 0.0007 | 20.31 | 975 | 0.0302 | 0.9902 | | 0.0007 | 20.62 | 990 | 0.0304 | 0.9902 | | 0.0007 | 20.94 | 1005 | 0.0305 | 0.9902 | | 0.0007 | 21.25 | 1020 | 0.0312 | 0.9902 | | 0.0007 | 21.56 | 1035 | 0.0311 | 0.9902 | | 0.0007 | 21.88 | 1050 | 0.0310 | 0.9902 | | 0.0007 | 22.19 | 1065 | 0.0310 | 0.9902 | | 0.0007 | 22.5 | 1080 | 0.0309 | 0.9902 | | 0.0006 | 22.81 | 1095 | 0.0311 | 0.9902 | | 0.0006 | 23.12 | 1110 | 0.0313 | 0.9902 | | 0.0007 | 23.44 | 1125 | 0.0313 | 0.9902 | | 0.0007 | 23.75 | 1140 | 0.0313 | 0.9902 | | 0.0006 | 24.06 | 1155 | 0.0312 | 0.9902 | | 0.0006 | 24.38 | 1170 | 0.0313 | 0.9902 | | 0.0007 | 24.69 | 1185 | 0.0314 | 0.9902 | | 0.0006 | 25.0 | 1200 | 0.0313 | 0.9902 | | 0.0006 | 25.31 | 1215 | 0.0314 | 0.9902 | | 0.0006 | 25.62 | 1230 | 0.0315 | 0.9902 | | 0.0006 | 25.94 | 1245 | 0.0316 | 0.9902 | | 0.0006 | 26.25 | 1260 | 0.0316 | 0.9902 | | 0.0006 | 26.56 | 1275 | 0.0316 | 0.9902 | | 0.0006 | 26.88 | 1290 | 0.0316 | 0.9902 | | 0.0006 | 27.19 | 1305 | 0.0315 | 0.9902 | | 0.0006 | 27.5 | 1320 | 0.0319 | 0.9902 | | 0.0006 | 27.81 | 1335 | 0.0320 | 0.9902 | | 0.0006 | 28.12 | 1350 | 0.0320 | 0.9902 | | 0.0006 | 28.44 | 1365 | 0.0320 | 0.9902 | | 0.0006 | 28.75 | 1380 | 0.0320 | 0.9902 | | 0.0006 | 29.06 | 1395 | 0.0320 | 0.9902 | | 0.0006 | 29.38 | 1410 | 0.0320 | 0.9902 | | 0.0006 | 29.69 | 1425 | 0.0320 | 0.9902 | | 0.0006 | 30.0 | 1440 | 0.0320 | 0.9902 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
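A quick way to exercise the classifier, since the card omits a usage snippet: a minimal inference sketch (the sample utterance is illustrative, and the returned label names come from the model's own `id2label` config, which this card does not list):

```python
from transformers import pipeline

# Load the fine-tuned BERT dialogue classifier directly from the Hub.
classifier = pipeline("text-classification", model="SharonTudi/DIALOGUE")

# Classify a sample utterance; the label string comes from the model's config.
print(classifier("Could you book me a table for two at seven?"))
```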
TheBloke/Lumosia-MoE-4x10.7-GGUF
TheBloke
2024-01-10T09:43:02Z
215
6
transformers
[ "transformers", "gguf", "mixtral", "moe", "merge", "mergekit", "lazymergekit", "DopeorNope/SOLARC-M-10.7B", "maywell/PiVoT-10.7B-Mistral-v0.2-RP", "kyujinpy/Sakura-SOLAR-Instruct", "jeonsworld/CarbonVillain-en-10.7B-v1", "base_model:SteelStorage/Lumosia-MoE-4x10.7", "base_model:quantized:SteelStorage/Lumosia-MoE-4x10.7", "license:apache-2.0", "region:us", "conversational" ]
null
2024-01-10T05:01:15Z
--- base_model: Steelskull/Lumosia-MoE-4x10.7 inference: false license: apache-2.0 model_creator: Steel model_name: Lumosia MoE 4X10.7 model_type: mixtral prompt_template: '### System: ### USER:{prompt} ### Assistant: ' quantized_by: TheBloke tags: - moe - merge - mergekit - lazymergekit - DopeorNope/SOLARC-M-10.7B - maywell/PiVoT-10.7B-Mistral-v0.2-RP - kyujinpy/Sakura-SOLAR-Instruct - jeonsworld/CarbonVillain-en-10.7B-v1 --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Lumosia MoE 4X10.7 - GGUF - Model creator: [Steel](https://huggingface.co/Steelskull) - Original model: [Lumosia MoE 4X10.7](https://huggingface.co/Steelskull/Lumosia-MoE-4x10.7) <!-- description start --> ## Description This repo contains GGUF format model files for [Steel's Lumosia MoE 4X10.7](https://huggingface.co/Steelskull/Lumosia-MoE-4x10.7). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF)
* [Steel's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Steelskull/Lumosia-MoE-4x10.7)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Lumosia

```
### System:

### USER:{prompt}

### Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [lumosia-moe-4x10.7.Q2_K.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q2_K.gguf) | Q2_K | 2 | 12.02 GB| 14.52 GB | smallest, significant quality loss - not recommended for most purposes | | [lumosia-moe-4x10.7.Q3_K_S.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q3_K_S.gguf) | Q3_K_S | 3 | 15.57 GB| 18.07 GB | very small, high quality loss | | [lumosia-moe-4x10.7.Q3_K_M.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q3_K_M.gguf) | Q3_K_M | 3 | 15.70 GB| 18.20 GB | very small, high quality loss | | [lumosia-moe-4x10.7.Q3_K_L.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q3_K_L.gguf) | Q3_K_L | 3 | 15.83 GB| 18.33 GB | small, substantial quality loss | | [lumosia-moe-4x10.7.Q4_0.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q4_0.gguf) | Q4_0 | 4 | 20.34 GB| 22.84 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [lumosia-moe-4x10.7.Q4_K_S.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q4_K_S.gguf) | Q4_K_S | 4 | 20.35 GB| 22.85 GB | small, greater quality loss | | [lumosia-moe-4x10.7.Q4_K_M.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q4_K_M.gguf) | Q4_K_M | 4 | 20.37 GB| 22.87 GB | medium, balanced quality - recommended | | [lumosia-moe-4x10.7.Q5_0.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q5_0.gguf) | Q5_0 | 5 | 24.84 GB| 27.34 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [lumosia-moe-4x10.7.Q5_K_S.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q5_K_S.gguf) | Q5_K_S | 5 | 24.84 GB| 27.34 GB | large, low quality loss - recommended | | [lumosia-moe-4x10.7.Q5_K_M.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q5_K_M.gguf) | Q5_K_M | 5 | 24.85 GB| 27.35 GB | large, very low quality loss - recommended | | [lumosia-moe-4x10.7.Q6_K.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q6_K.gguf) | Q6_K | 6 | 29.62 GB| 32.12 GB | very large, extremely low quality loss | | [lumosia-moe-4x10.7.Q8_0.gguf](https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF/blob/main/lumosia-moe-4x10.7.Q8_0.gguf) | Q8_0 | 8 | 38.36 GB| 40.86 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Lumosia-MoE-4x10.7-GGUF and below it, a specific filename to download, such as: lumosia-moe-4x10.7.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Lumosia-MoE-4x10.7-GGUF lumosia-moe-4x10.7.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Lumosia-MoE-4x10.7-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Lumosia-MoE-4x10.7-GGUF lumosia-moe-4x10.7.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m lumosia-moe-4x10.7.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n\n### USER:{prompt}\n\n### Assistant:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./lumosia-moe-4x10.7.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,             # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "### System:\n\n### USER:{prompt}\n\n### Assistant:",  # Prompt
  max_tokens=512,   # Generate up to 512 tokens
  stop=["</s>"],    # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True         # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./lumosia-moe-4x10.7.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Steel's Lumosia MoE 4X10.7

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/Qb88YeudOf7MYuGKTirXC.png)

# Lumosia-MoE-4x10.7

"Lumosia" was selected as it's a MoE of multiple SOLAR merges, so it really "lights the way"... it's 3am.

This is a very experimental model: a MoE of all the well-performing SOLAR models (based on personal experience, not the open leaderboard). Why? I just wanted to see what would happen.

Context is maybe 16k. Chat-instruct breaks the model at the moment, not really sure why, even though it will follow instructions.

Waiting on a fix for quants of FrankenMoE.

Template:
```
### System:

### USER:{prompt}

### Assistant:
```

Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```

Lumosia-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [DopeorNope/SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B)
* [maywell/PiVoT-10.7B-Mistral-v0.2-RP](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2-RP)
* [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)

## Evals:
* Pending

## Examples:
```
Example 1:
User:
Hello lumi, what is the meaning of life in around 100 words.

Lumosia:
The meaning of life can vary greatly depending on individual beliefs, values, and experiences. One common perspective suggests that the primary purpose of life is to seek happiness, fulfillment, and personal growth through relationships accomplishments, and spiritual awareness. Living with integrity.
compassion, and gratitude while making a positive impact on others and the world around us can also contribute to finding meaning in life UItimately, the meaning of life is subjective and unique to each individual, who must define it for themselves through exploration reflection, and personal choice.
```
```
Example 2:
User:
Explain a MoE (Mixture of Experts) in around 100 words

Lumosia:
A MoE, short for Mixture of Experts, is a machine learning technique utilized in ensemble learning. It involves combining multiple specialist models (experts) to optimize predictive performance. Each expert focuses on a subset of the input features or data domains, making them specialized in particular areas. Through a weighting mechanism, like Gaussian Mixture Model, the predictions of experts are blended to produce a final result. MoE models demonstrate effectiveness in handling complex or ambiguous inputs where a single model might struggle. They are commonly used in natural language processing. computer vision, and speech synthesis.
```

## 🧩 Configuration

```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts: [""]
  - source_model: maywell/PiVoT-10.7B-Mistral-v0.2-RP
    positive_prompts: [""]
  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts: [""]
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v1
    positive_prompts: [""]
```

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Steelskull/Lumosia-MoE-4x10.7"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

<!-- original-model-card end -->
openvoid/Prox-1-Mistral-7B
openvoid
2024-01-10T09:39:13Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "conversational", "en", "base_model:cognitivecomputations/dolphin-2.1-mistral-7b", "base_model:finetune:cognitivecomputations/dolphin-2.1-mistral-7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T09:32:08Z
--- base_model: cognitivecomputations/dolphin-2.1-mistral-7b tags: - mistral - instruct - finetune - chatml - gpt4 - synthetic data - distillation model-index: - name: Prox-1-Mistral-7B results: [] license: apache-2.0 language: - en --- # Prox 1.0 - Mistral 7B ![image/png](https://storage.googleapis.com/xenofold-cdn/prox.png) ## Model description Prox 1.0 Mistral 7B represents a significant evolution in the Mistral Fine-tune series, having been augmented with an extensive array of datasets focused on hacking instructions and Q&A content. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
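The tags above point to a ChatML-style instruct fine-tune (inherited from the Dolphin 2.1 base). A minimal inference sketch, assuming the tokenizer ships a chat template; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "openvoid/Prox-1-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt from the tokenizer's chat template (assumed to be ChatML).
messages = [{"role": "user", "content": "Explain what a port scan is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```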
s3nh/TinyLLama-4x1.1B-MoE
s3nh
2024-01-10T09:36:30Z
134
6
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "conversational", "en", "base_model:Tensoic/TinyLlama-1.1B-3T-openhermes", "base_model:merge:Tensoic/TinyLlama-1.1B-3T-openhermes", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:merge:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:phanerozoic/Tiny-Pirate-1.1b-v0.1", "base_model:merge:phanerozoic/Tiny-Pirate-1.1b-v0.1", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T09:22:44Z
---
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- 78health/TinyLlama_1.1B-function-calling
- phanerozoic/Tiny-Pirate-1.1b-v0.1
- Tensoic/TinyLlama-1.1B-3T-openhermes
tags:
- mergekit
- merge
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

Example usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("s3nh/TinyLLama-4x1.1B-MoE")
model = AutoModelForCausalLM.from_pretrained("s3nh/TinyLLama-4x1.1B-MoE").to(device)

input_text = """
###Input: You are a pirate. tell me a story about wrecked ship.
###Response:
"""

input_ids = tokenizer.encode(input_text, return_tensors='pt').to(device)
output = model.generate(inputs=input_ids,
                        max_new_tokens=256,  # the original snippet left max_length undefined
                        do_sample=True,
                        top_k=10,
                        temperature=0.7,
                        pad_token_id=tokenizer.eos_token_id,
                        attention_mask=input_ids.new_ones(input_ids.shape))
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

This model was made possible by the tremendous work of the mergekit developers. I decided to merge TinyLlama models to create a mixture of experts. The config used is below:

```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
experts:
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
  - source_model: 78health/TinyLlama_1.1B-function-calling
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
  - source_model: phanerozoic/Tiny-Pirate-1.1b-v0.1
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
  - source_model: Tensoic/TinyLlama-1.1B-3T-openhermes
    positive_prompts:
      - "reason"
      - "provide"
      - "instruct"
      - "summarize"
      - "count"
```
Deojaklah/Memey
Deojaklah
2024-01-10T09:34:55Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-10T09:34:54Z
--- license: creativeml-openrail-m ---
PracticeLLM/Twice-KoSOLAR-16.1B-test
PracticeLLM
2024-01-10T09:26:25Z
3,667
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "en", "ko", "arxiv:2312.15166", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-30T01:47:43Z
---
language:
- en
- ko
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- merge
---

# **Twice-KoSOLAR-16.1B-test**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Model Purpose**

<img src='./solar.png'>

Recently, the SOLAR-10.7B model has been showing strong results on the LLM leaderboard, on the strength of the [Depth-Up-Scaling](https://arxiv.org/pdf/2312.15166.pdf) methodology (pictured above). In addition, the `seungduk/KoSOLAR-10.7B-v0.1` model built by `Yanolja` has had a major impact on the Ko-LLM leaderboard, and it is expected to change the direction of that leaderboard going forward.

This raised a simple question for me. **The Depth-Up-Scaling (DUS) methodology announced by Upstage merges (passthrough) two mistral-7B models.** Remarkably, the `upstage/SOLAR-10.7B-v1.0` model built with DUS scored higher on the leaderboard than the original mistral-7B model (see the table below).

So I was very curious whether applying the DUS methodology, without restriction, to a different model would produce the same result. 🙃 I intend to answer that question through this experiment. 😋😋

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1) | **66.04** | 62.03 | 84.54 | 65.56 | 45.03 | 83.58 | 55.50 |
| [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | **66.04** | 61.95 | 84.60 | 65.48 | 45.04 | 83.66 | 55.50 |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
> Follow up as [En-link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

**Method**
Using [Mergekit](https://github.com/cg123/mergekit).

- Korean pretrain SOTA (as of 12/30): [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1)

**Merge config**

In the original [`SOLAR-10.7B`](https://arxiv.org/pdf/2312.15166.pdf) paper, the mistral-7B layers were split into a 24-layer part and an 8-layer part, and two copies of the 24-layer part were merged to build a 48-layer model. Since that ratio is `uses:waste = 3:1`, I split the `seungduk/KoSOLAR-10.7B-v0.1` layers into a 36-layer part and a 12-layer part at the same ratio, and merged two copies of the 36-layer part to build a 72-layer model. The detailed merge config is as follows.

```yaml
slices:
  - sources:
    - model: seungduk/KoSOLAR-10.7B-v0.1
      layer_range: [0, 36]
  - sources:
    - model: seungduk/KoSOLAR-10.7B-v0.1
      layer_range: [12, 48]
merge_method: passthrough
dtype: float16
```
> Share everything. It is my belief.

# **Model Benchmark**

## Open Ko-LLM leaderboard & lm-evaluation-harness (zero-shot)
- Follow up as [Ko-link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 | | --- | --- | --- | --- | --- | --- | --- | | PracticeLLM/Twice-KoSOLAR-16.1B-test | 50.20 | 45.65 | 57.14 | 51.39 | 42.99 | 53.84 | | [Megastudy/M-SOLAR-10.7B-v1.1-beta](https://huggingface.co/Megastudy/M-SOLAR-10.7B-v1.1-beta) | 55.25 | 51.71 | 60.86 | 54.24 | 47.12 | 62.34 | | [jjourney1125/M-SOLAR-10.7B-v1.0](https://huggingface.co/jjourney1125/M-SOLAR-10.7B-v1.0) | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 | | [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1) | 52.40 | 47.18 | 59.54 | 52.04 | 41.84 | 61.39 | - Follow up as [beomi/LM-Harness](https://github.com/Beomi/ko-lm-evaluation-harness) ``` gpt2 (pretrained=PracticeLLM/Twice-KoSOLAR-16.1B-test), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.7201|± |0.0120| | | |macro_f1|0.7073|± |0.0124| |kobest_copa | 0|acc |0.6510|± |0.0151| | | |macro_f1|0.6506|± |0.0151| |kobest_hellaswag| 0|acc |0.4520|± |0.0223| | | |acc_norm|0.5820|± |0.0221| | | |macro_f1|0.4475|± |0.0222| |kobest_sentineg | 0|acc |0.7078|± |0.0229| | | |macro_f1|0.7071|± |0.0229| gpt2 (pretrained=Megastudy/M-SOLAR-10.7B-v1.1-beta), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.7137|± |0.0121| | | |macro_f1|0.6878|± |0.0128| |kobest_copa | 0|acc |0.7060|± |0.0144| | | |macro_f1|0.7054|± |0.0145| |kobest_hellaswag| 0|acc |0.4620|± |0.0223| | | |acc_norm|0.5360|± |0.0223| | | |macro_f1|0.4595|± |0.0223| |kobest_sentineg | 0|acc |0.7431|± |0.0220| | | |macro_f1|0.7295|± |0.0230| gpt2 (pretrained=jjourney1125/M-SOLAR-10.7B-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.5228|± |0.0133| | | |macro_f1|0.3788|± |0.0097| |kobest_copa | 0|acc |0.6860|± |0.0147| | | |macro_f1|0.6858|± |0.0147| |kobest_hellaswag| 0|acc |0.4580|± |0.0223| | | |acc_norm|0.5380|± |0.0223| | | |macro_f1|0.4552|± |0.0222| |kobest_sentineg | 0|acc |0.6474|± |0.0240| | | |macro_f1|0.6012|± |0.0257| gpt2 (pretrained=yanolja/KoSOLAR-10.7B-v0.1), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.8725|± |0.0089| | | |macro_f1|0.8722|± |0.0089| |kobest_copa | 0|acc |0.6850|± |0.0147| | | |macro_f1|0.6844|± |0.0147| |kobest_hellaswag| 0|acc |0.4340|± |0.0222| | | |acc_norm|0.5840|± |0.0221| | | |macro_f1|0.4296|± |0.0221| |kobest_sentineg | 0|acc |0.7506|± |0.0217| | | |macro_f1|0.7505|± |0.0217| ``` ## Open EN-LLM leaderboard & lm-evaluation-harness(zero-shot) - Follow up as [En-link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). 
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/Twice-KoSOLAR-16.1B-test | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | **74.40** | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
| [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1) | 66.04 | 62.03 | 84.54 | 65.56 | 45.03 | 83.58 | 55.50 |
| [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | 66.04 | 61.95 | 84.60 | 65.48 | 45.04 | 83.66 | 55.50 |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |

- Follow up as [Eleuther/LM-Harness](https://github.com/EleutherAI/lm-evaluation-harness)

```yaml
(will update)
```

# Implementation Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/Twice-KoSOLAR-16.1B-test"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

---

References (Model Cards)

# yanolja/KoSOLAR-10.7B-v0.1

This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), trained on various Korean web-crawled datasets that are publicly available on HuggingFace. The hypothesis was that while maintaining the original performance of the base model, we could add more tokens to the base model's vocabulary by training the embeddings for the new tokens only. The evaluation results seem to indicate that both English and Korean performances were preserved.

## Model Description

Most parameters of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) were frozen except for the embed_tokens layer and the lm_head layer. Embeddings for the existing tokens in those layers were frozen during training. The embeddings for the new tokens have been tuned.

---

# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**

# **Introduction**

We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B.

We present a methodology for scaling LLMs called depth up-scaling (DUS), which encompasses architectural modifications and continued pretraining. In other words, we integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.

SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table. Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements ([SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)).

For full details of this model please read our [paper](https://arxiv.org/abs/2312.15166).
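Circling back to the merge itself: the layer arithmetic behind the merge config above can be checked directly. A small illustrative snippet (layer counts taken from the card, not from the released weights):

```python
# The merge stacks two 36-layer slices of the 48-layer KoSOLAR base:
# [0, 36) from one copy and [12, 48) from the other.
slice_a = set(range(0, 36))
slice_b = set(range(12, 48))

total_layers = len(slice_a) + len(slice_b)   # 72 layers in the merged model
duplicated = len(slice_a & slice_b)          # layers 12..35 appear twice -> 24
dropped_per_copy = 48 - len(slice_a)         # 12, so uses:waste = 36:12 = 3:1

print(total_layers, duplicated, dropped_per_copy)  # 72 24 12
```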
afiqlol/lasto4
afiqlol
2024-01-10T09:19:18Z
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned", "base_model:finetune:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-10T07:28:33Z
--- base_model: citizenlab/twitter-xlm-roberta-base-sentiment-finetunned tags: - generated_from_trainer metrics: - accuracy model-index: - name: lasto4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lasto4 This model is a fine-tuned version of [citizenlab/twitter-xlm-roberta-base-sentiment-finetunned](https://huggingface.co/citizenlab/twitter-xlm-roberta-base-sentiment-finetunned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5596 - Accuracy: 0.8033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 300 | 0.4855 | 0.8133 | | 0.5151 | 2.0 | 600 | 0.5272 | 0.8117 | | 0.5151 | 3.0 | 900 | 0.5596 | 0.8033 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cpu - Datasets 2.14.5 - Tokenizers 0.15.0
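Since the usage sections above are still placeholders, here is a minimal inference sketch; the example sentence and the emitted label names (inherited from the base sentiment model's config) are assumptions.

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub
classifier = pipeline("text-classification", model="afiqlol/lasto4")
print(classifier("Saya sangat berpuas hati dengan perkhidmatan ini."))  # illustrative input
# e.g. [{'label': 'Positive', 'score': 0.97}] -- label names depend on the model config
```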
damerajee/Tinyllama-sft-small
damerajee
2024-01-10T09:11:28Z
12
0
peft
[ "peft", "safetensors", "Instruct", "TinyLlama", "text-generation", "en", "dataset:yahma/alpaca-cleaned", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:adapter:unsloth/tinyllama-bnb-4bit", "license:mit", "region:us" ]
text-generation
2024-01-09T16:25:12Z
---
library_name: peft
base_model: unsloth/tinyllama-bnb-4bit
license: mit
datasets:
- yahma/alpaca-cleaned
language:
- en
pipeline_tag: text-generation
tags:
- Instruct
- TinyLlama
---

# Steps to try the model:

### Prompt template
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```

### Load the model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

config = PeftConfig.from_pretrained("damerajee/Tinyllama-sft-small")
model = AutoModelForCausalLM.from_pretrained("unsloth/tinyllama")
tokenizer = AutoTokenizer.from_pretrained("damerajee/Tinyllama-sft-small")
model = PeftModel.from_pretrained(model, "damerajee/Tinyllama-sft-small")
model = model.to("cuda")  # move the model to GPU to match the inputs below
```

### Inference
```python
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "i want to learn machine learning help me",
            "",  # input
            "",  # output
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=312, use_cache=True)
print(tokenizer.batch_decode(outputs))
```

# Model Information
The base model [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) was instruction-finetuned using [Unsloth](https://github.com/unslothai/unsloth).

# Training Details
The model was trained for 1 epoch on a free Google Colab instance, which took approximately 1 hour and 30 minutes.
darshan8950/openhermes-mistral-dpo-gptq
darshan8950
2024-01-10T09:06:03Z
0
0
null
[ "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ", "base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-01-10T09:05:07Z
--- license: apache-2.0 base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ tags: - trl - dpo - generated_from_trainer model-index: - name: openhermes-mistral-dpo-gptq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openhermes-mistral-dpo-gptq This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1934 - Rewards/chosen: 1.5646 - Rewards/rejected: -0.8402 - Rewards/accuracies: 1.0 - Rewards/margins: 2.4048 - Logps/rejected: -45.4271 - Logps/chosen: -277.5632 - Logits/rejected: -1.3185 - Logits/chosen: -2.0273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6717 | 0.01 | 10 | 0.5892 | 0.1318 | -0.0930 | 0.75 | 0.2248 | -37.9552 | -291.8918 | -1.2049 | -2.0262 | | 0.5298 | 0.02 | 20 | 0.3756 | 0.6965 | -0.3112 | 1.0 | 1.0078 | -40.1375 | -286.2441 | -1.2959 | -2.0576 | | 0.3325 | 0.03 | 30 | 0.2663 | 1.1580 | -0.4907 | 1.0 | 1.6486 | -41.9316 | -281.6295 | -1.3242 | -2.0582 | | 0.2179 | 0.04 | 40 | 0.2153 | 1.4040 | -0.7133 | 1.0 | 2.1173 | -44.1586 | -279.1697 | -1.3211 | -2.0374 | | 0.1683 | 0.06 | 50 | 0.1934 | 1.5646 | -0.8402 | 1.0 | 2.4048 | -45.4271 | -277.5632 | -1.3185 | -2.0273 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.0
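The hyperparameters above map naturally onto a TRL DPO setup. The sketch below is a hedged reconstruction, not the author's script: `model`, `tokenizer`, and `train_dataset` (with `prompt`/`chosen`/`rejected` columns) are assumed to be prepared elsewhere, and `beta` is not reported in the card.

```python
from transformers import TrainingArguments
from trl import DPOTrainer

# Training configuration implied by the hyperparameter table above
training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    fp16=True,   # "Native AMP" mixed precision, per the card
    seed=42,
)

trainer = DPOTrainer(
    model=model,               # assumption: the GPTQ base model with a trainable adapter
    ref_model=None,            # assumption: implicit reference model when training an adapter
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    beta=0.1,                  # assumption: common default; the card does not report beta
)
trainer.train()
```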
mecoaoge2/fununun
mecoaoge2
2024-01-10T09:03:41Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-01-10T06:22:18Z
--- license: apache-2.0 library_name: peft tags: - trl - dpo - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 model-index: - name: fununun results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fununun This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6908 - Rewards/chosen: 0.0015 - Rewards/rejected: -0.0032 - Rewards/accuracies: 0.7176 - Rewards/margins: 0.0047 - Logps/rejected: -197.2385 - Logps/chosen: -235.0630 - Logits/rejected: -3.0691 - Logits/chosen: -3.1037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.693 | 0.04 | 20 | 0.6927 | 0.0001 | -0.0007 | 0.5614 | 0.0009 | -197.2139 | -235.0765 | -3.0688 | -3.1035 | | 0.6922 | 0.07 | 40 | 0.6919 | 0.0007 | -0.0017 | 0.6440 | 0.0024 | -197.2236 | -235.0704 | -3.0690 | -3.1036 | | 0.6913 | 0.11 | 60 | 0.6913 | 0.0011 | -0.0025 | 0.6886 | 0.0037 | -197.2319 | -235.0664 | -3.0691 | -3.1037 | | 0.6909 | 0.15 | 80 | 0.6909 | 0.0014 | -0.0030 | 0.7098 | 0.0044 | -197.2367 | -235.0639 | -3.0691 | -3.1037 | | 0.6906 | 0.19 | 100 | 0.6908 | 0.0015 | -0.0032 | 0.7176 | 0.0047 | -197.2385 | -235.0630 | -3.0691 | -3.1037 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
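For inference with this adapter, here is a minimal hedged sketch; it assumes this repo hosts a standard PEFT adapter on top of the base model named in the metadata.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "mecoaoge2/fununun")  # adapter repo id (assumption)
```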
lora-moe/belle_dolphine_p18
lora-moe
2024-01-10T09:02:24Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:baichuan-inc/Baichuan2-7B-Base", "base_model:adapter:baichuan-inc/Baichuan2-7B-Base", "region:us" ]
null
2024-01-10T09:01:58Z
--- library_name: peft base_model: baichuan-inc/Baichuan2-7B-Base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
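The "How to Get Started with the Model" section above is still a placeholder, so here is a minimal, hedged loading sketch. The base model id comes from the card metadata; that this repo hosts a LoRA adapter loadable with `PeftModel` is an assumption, and `trust_remote_code=True` is needed because Baichuan2 ships custom modeling code.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "baichuan-inc/Baichuan2-7B-Base"  # from the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

# Attach the adapter weights from this repository (assumption: standard PEFT layout)
model = PeftModel.from_pretrained(base, "lora-moe/belle_dolphine_p18")
```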
lora-moe/belle_dolphine_p15
lora-moe
2024-01-10T09:01:03Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:baichuan-inc/Baichuan2-7B-Base", "base_model:adapter:baichuan-inc/Baichuan2-7B-Base", "region:us" ]
null
2024-01-10T09:00:45Z
--- library_name: peft base_model: baichuan-inc/Baichuan2-7B-Base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
lora-moe/belle_dolphine_p13
lora-moe
2024-01-10T09:00:15Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:baichuan-inc/Baichuan2-7B-Base", "base_model:adapter:baichuan-inc/Baichuan2-7B-Base", "region:us" ]
null
2024-01-10T08:59:51Z
--- library_name: peft base_model: baichuan-inc/Baichuan2-7B-Base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
lora-moe/belle_dolphine_p10
lora-moe
2024-01-10T08:59:01Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:baichuan-inc/Baichuan2-7B-Base", "base_model:adapter:baichuan-inc/Baichuan2-7B-Base", "region:us" ]
null
2024-01-10T08:58:46Z
--- library_name: peft base_model: baichuan-inc/Baichuan2-7B-Base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
budecosystem/code-millenials-13b
budecosystem
2024-01-10T08:56:06Z
1489
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-02T09:16:08Z
--- license: llama2 library_name: transformers tags: - code model-index: - name: Code Millenials results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 0.7621 verified: false --- # Bud Code Millenials 13B Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all the existing models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, GPT-3.5 etc. by a large margin, and on par with GPT-4 (HumanEval ~ 82. Ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well with a HumanEval value of 88.2 & a context size of 168K, we will be releasing an API for Researchers, Enterprises, and potential Partners by January 2024 end. If interested, please reach out to jithinvg@bud.studio ### News 🔥🔥🔥 - [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). ### HumanEval <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> For the millenial models, the eval script in the github repo is used for the above result. Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc. ### Models | Model | Checkpoint | HumanEval (+) | MBPP (+) | |---------|-------------|---------------|----------| |Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) | |Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) | |Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) | |Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) | ### 🚀 Quick Start Inference code using the pre-trained model from the Hugging Face model hub ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-13b") model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-13b") template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
### Instruction:
{instruction}

### Response:"""

instruction = "Your code instruction here"  # replace with an actual coding instruction

prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```

## Training details

The model was trained on 8 A100 80GB GPUs for approximately 15 hours.

| Hyperparameters              | Value       |
| :----------------------------| :---------: |
| per_device_train_batch_size  | 2           |
| gradient_accumulation_steps  | 1           |
| epoch                        | 3           |
| steps                        | 34503       |
| learning_rate                | 2e-5        |
| lr scheduler type            | cosine      |
| warmup ratio                 | 0.1         |
| optimizer                    | adamw       |
| fp16                         | True        |
| GPU                          | 8 A100 80GB |

### Important Note

- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding.
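As a usage note on the Quick Start above: a 13B model loaded with default settings runs in fp32 on CPU, which is slow and memory-hungry, so for GPU inference you will usually want half precision and automatic device placement. This is a hedged variant; the dtype and `device_map` choices are assumptions, not from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-13b")
model = AutoModelForCausalLM.from_pretrained(
    "budecosystem/code-millenials-13b",
    torch_dtype=torch.float16,  # assumption: fp16 to fit typical GPUs
    device_map="auto",          # assumption: requires the `accelerate` package
)
```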
TheBloke/OrcaMaid-v3-13B-32k-GGUF
TheBloke
2024-01-10T08:48:26Z
258
15
transformers
[ "transformers", "gguf", "llama", "text-generation", "base_model:ddh0/OrcaMaid-v3-13b-32k", "base_model:quantized:ddh0/OrcaMaid-v3-13b-32k", "license:other", "region:us" ]
text-generation
2024-01-10T00:39:53Z
--- base_model: ddh0/OrcaMaid-v3-13b-32k inference: false license: other license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE license_name: microsoft-research-license model_creator: ddh0 model_name: Orcamaid v3 13B 32K model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Orcamaid v3 13B 32K - GGUF - Model creator: [ddh0](https://huggingface.co/ddh0) - Original model: [Orcamaid v3 13B 32K](https://huggingface.co/ddh0/OrcaMaid-v3-13b-32k) <!-- description start --> ## Description This repo contains GGUF format model files for [ddh0's Orcamaid v3 13B 32K](https://huggingface.co/ddh0/OrcaMaid-v3-13b-32k). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF)
* [ddh0's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ddh0/OrcaMaid-v3-13b-32k)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->

<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [ddh0's Orcamaid v3 13B 32K](https://huggingface.co/ddh0/OrcaMaid-v3-13b-32k).
<!-- licensing end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization.
Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [orcamaid-v3-13b-32k.Q2_K.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [orcamaid-v3-13b-32k.Q3_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [orcamaid-v3-13b-32k.Q3_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [orcamaid-v3-13b-32k.Q3_K_L.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [orcamaid-v3-13b-32k.Q4_0.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [orcamaid-v3-13b-32k.Q4_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [orcamaid-v3-13b-32k.Q4_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [orcamaid-v3-13b-32k.Q5_0.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [orcamaid-v3-13b-32k.Q5_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [orcamaid-v3-13b-32k.Q5_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [orcamaid-v3-13b-32k.Q6_K.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [orcamaid-v3-13b-32k.Q8_0.gguf](https://huggingface.co/TheBloke/OrcaMaid-v3-13B-32k-GGUF/blob/main/orcamaid-v3-13b-32k.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/OrcaMaid-v3-13B-32k-GGUF and below it, a specific filename to download, such as: orcamaid-v3-13b-32k.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/OrcaMaid-v3-13B-32k-GGUF orcamaid-v3-13b-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/OrcaMaid-v3-13B-32k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/OrcaMaid-v3-13B-32k-GGUF orcamaid-v3-13b-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m orcamaid-v3-13b-32k.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./orcamaid-v3-13b-32k.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./orcamaid-v3-13b-32k.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute.
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: ddh0's Orcamaid v3 13B 32K # OrcaMaid-v3-13b-32k This is the third version of OrcaMaid, a weighted gradient SLERP merge between Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and NeverSleep's [Noromaid-13b-v0.3](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3). The goal of this merge is to create an unusually intelligent and human-like model especially for RP. The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs. ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {YOUR MESSAGE HERE} ### Response: {BOT MESSAGE HERE} ``` ### Misc. information - BOS token is `<s>` - EOS token is `</s>` - Native context length is `32768` via YaRN (original context length was `4096`) - Base model is Llama 2 - Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE) ### Thanks - Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) of [NeverSleep](https://huggingface.co/NeverSleep) for Noromaid <!-- original-model-card end -->
Brackly/alpaca-lora-7b
Brackly
2024-01-10T08:44:18Z
0
0
null
[ "dataset:yahma/alpaca-cleaned", "license:mit", "region:us" ]
null
2024-01-10T08:38:27Z
--- license: mit datasets: - yahma/alpaca-cleaned --- This repo contains a low-rank adapter for LLaMA-7b fit on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset. This version of the weights was trained with the following hyperparameters: - Epochs: 10 (load from best epoch) - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, k_proj, v_proj, o_proj That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=10 \ --cutoff_len=512 \ --group_by_length \ --output_dir='./lora-alpaca-512-qkvo' \ --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \ --lora_r=16 \ --micro_batch_size=8 ``` Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
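For inference, here is a hedged sketch of applying this adapter on top of the base model named in the training command above; the `decapoda-research` repo is old and may require a patched tokenizer config, and the dtype and device settings are illustrative assumptions.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # base model from the training command
    torch_dtype=torch.float16,        # assumption: fp16 for GPU inference
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(base, "Brackly/alpaca-lora-7b")  # this repo's adapter
```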
mschuh/PubChemDeBERTa
mschuh
2024-01-10T08:39:08Z
176
1
transformers
[ "transformers", "pytorch", "deberta-v2", "feature-extraction", "PubChem", "chemistry", "biology", "deberta-v3", "sentence-similarity", "en", "arxiv:2401.04478", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-07T08:57:06Z
--- language: - en metrics: - perplexity pipeline_tag: sentence-similarity author: Maximilian G. Schuh tags: - PubChem - chemistry - biology - deberta-v3 --- # TwinBooster ## PubChemDeBERTa: Fine-tuned DeBERTa V3 base on PubChem bioassay corpora [![arXiv](https://img.shields.io/badge/arXiv-2401.04478-b31b1b.svg)](https://arxiv.org/abs/2401.04478) ### Synergising Large Language Models with Barlow Twins and Gradient Boosting for Enhanced Molecular Property Prediction Maximilian G. Schuh, Davide Boldini, Stephan A. Sieber @ Chair of Organic Chemistry II, TUM School of Natural Sciences, Technical University of Munich **Abstract** The success of drug discovery and development relies on the precise prediction of molecular activities and properties. While in silico molecular property prediction has shown remarkable potential, its use has been limited so far to assays for which large amounts of data are available. In this study, we use a fine-tuned large language model to integrate biological assays based on their textual information, coupled with Barlow Twins, a Siamese neural network using a novel self-supervised learning approach. This architecture uses both assay information and molecular fingerprints to extract the true molecular information. TwinBooster enables the prediction of properties of unseen bioassays and molecules by providing state-of-the-art performance on zero-shot learning tasks. Remarkably, our artificial intelligence pipeline shows excellent performance on the FS-Mol benchmark. This breakthrough demonstrates the application of deep learning to critical property prediction tasks where data is typically scarce. By accelerating the early identification of active molecules in drug discovery and development, this method has the potential to help streamline the identification of novel therapeutics.
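A minimal sketch of extracting assay-text embeddings from this checkpoint with `transformers`; mean pooling over tokens is an assumption here, since the card does not specify a pooling strategy, and the example assay text is purely illustrative.

```python
# Minimal feature-extraction sketch; mean pooling over non-padding tokens
# is an assumption -- the card does not state a pooling strategy.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("mschuh/PubChemDeBERTa")
model = AutoModel.from_pretrained("mschuh/PubChemDeBERTa")

texts = ["Luciferase-based assay measuring inhibition of a target enzyme."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, tokens, dim)

mask = batch["attention_mask"].unsqueeze(-1)         # mask out padding tokens
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean-pooled vectors
```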
haryoaw/scenario-TCR-data-glue-cola-model-bert-base-uncased
haryoaw
2024-01-10T08:30:45Z
93
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-29T21:06:09Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: scenario-TCR-data-glue-cola-model-bert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scenario-TCR-data-glue-cola-model-bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9233 - Matthews Correlation: 0.5556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6969 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 268 | 0.4170 | 0.5507 | | 0.3968 | 2.0 | 536 | 0.4077 | 0.5769 | | 0.3968 | 3.0 | 804 | 0.6058 | 0.5667 | | 0.1661 | 4.0 | 1072 | 0.8344 | 0.5396 | | 0.1661 | 5.0 | 1340 | 0.7354 | 0.5402 | | 0.1119 | 6.0 | 1608 | 0.7466 | 0.5593 | | 0.1119 | 7.0 | 1876 | 0.9233 | 0.5556 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.16.0 - Tokenizers 0.15.0
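Given the model name and the Matthews-correlation metric, this appears to be a GLUE CoLA (grammatical acceptability) fine-tune; a hedged usage sketch follows. The label semantics are an assumption — the card lists no id2label mapping, so the pipeline may return the generic `LABEL_0`/`LABEL_1` names.

```python
# Hedged usage sketch for the fine-tuned checkpoint; without an id2label
# mapping in the card, outputs may be the generic LABEL_0 / LABEL_1 names.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="haryoaw/scenario-TCR-data-glue-cola-model-bert-base-uncased",
)

# CoLA-style acceptability judgments: one grammatical, one ungrammatical.
print(classifier("The cat sat on the mat."))
print(classifier("The sat cat mat on the."))
```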
ImPikov/game-character-v2
ImPikov
2024-01-10T08:18:52Z
424
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-10T08:14:39Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Game_Character_v2 Dreambooth model trained by ImPikov with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
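A minimal `diffusers` sketch for trying the checkpoint outside Colab; the prompt is a guess, since the card does not state the instance token the DreamBooth run was keyed to, and the fp16/CUDA settings are assumptions.

```python
# Hedged diffusers sketch; the prompt is a guess because the card does not
# name the DreamBooth instance token. fp16/CUDA settings are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ImPikov/game-character-v2", torch_dtype=torch.float16
).to("cuda")

image = pipe("portrait of a game character, detailed, fantasy armor").images[0]
image.save("sample.png")
```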
magixn/ppo-Huggy
magixn
2024-01-10T08:09:00Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-01-10T08:08:53Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: magixn/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
zym1/lora
zym1
2024-01-10T07:54:56Z
0
1
null
[ "safetensors", "region:us" ]
null
2023-07-18T07:13:01Z
--- license: cc-by-4.0 --- Personal LoRA models, mainly of characters from *Warship Girls R* (《战舰少女R》).