modelId
stringlengths
5
139
author
stringlengths
2
42
last_modified
timestamp[us, tz=UTC]date
2020-02-15 11:33:14
2025-08-30 06:27:36
downloads
int64
0
223M
likes
int64
0
11.7k
library_name
stringclasses
527 values
tags
listlengths
1
4.05k
pipeline_tag
stringclasses
55 values
createdAt
timestamp[us, tz=UTC]date
2022-03-02 23:29:04
2025-08-30 06:27:12
card
stringlengths
11
1.01M
tresbien1/q-FrozenLake-v1-4x4-noSlippery
tresbien1
2023-12-20T10:40:49Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-20T10:40:46Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="tresbien1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
anjith672/sdxlStyleFinetune
anjith672
2023-12-20T10:40:02Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-12-20T10:40:01Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of a sks style tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
amithapa/q-taxi-v3
amithapa
2023-12-20T10:39:13Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-20T10:20:51Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage model = load_from_hub(repo_id="amithapa/q-taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"])
ntc-ai/SDXL-LoRA-slider.fireworks
ntc-ai
2023-12-20T10:37:38Z
21
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2023-12-20T10:37:34Z
--- language: - en thumbnail: "images/evaluate/fireworks...night/fireworks_17_3.0.png" widget: - text: fireworks output: url: images/fireworks_17_3.0.png - text: fireworks output: url: images/fireworks_19_3.0.png - text: fireworks output: url: images/fireworks_20_3.0.png - text: fireworks output: url: images/fireworks_21_3.0.png - text: fireworks output: url: images/fireworks_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "fireworks" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - fireworks (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/fireworks_17_-3.0.png" width=256 height=256 /> | <img src="images/fireworks_17_0.0.png" width=256 height=256 /> | <img src="images/fireworks_17_3.0.png" width=256 height=256 /> | | <img src="images/fireworks_19_-3.0.png" width=256 height=256 /> | <img src="images/fireworks_19_0.0.png" width=256 height=256 /> | <img src="images/fireworks_19_3.0.png" width=256 height=256 /> | | <img src="images/fireworks_20_-3.0.png" width=256 height=256 /> | <img src="images/fireworks_20_0.0.png" width=256 height=256 /> | <img src="images/fireworks_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` fireworks ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.fireworks', weight_name='fireworks.safetensors', adapter_name="fireworks") # Activate the LoRA pipe.set_adapters(["fireworks"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, fireworks" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 480+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
Penguin-N/ppo-SnowballTarget
Penguin-N
2023-12-20T10:35:46Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-12-20T09:40:36Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # Watch your agent playing 👀 For this step it’s simple: 1. Remember your repo-id 2. Go here: https://huggingface.co/spaces/ThomasSimonini/ML-Agents-SnowballTarget 3. Launch the game and put it in full screen by clicking on the bottom right button # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Penguin-N/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Penguin-N/Paramids
Penguin-N
2023-12-20T10:33:46Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-12-20T10:27:39Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # Watch your agent playing 👀 👉 [https://huggingface.co/spaces/unity/ML-Agents-Pyramids](https://huggingface.co/spaces/unity/ML-Agents-Pyramids) # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Penguin-N/Paramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AICODER009/xlm-roberta-base-finetuned-panx-fr
AICODER009
2023-12-20T10:32:30Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-20T10:29:17Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.fr split: validation args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8446995273463875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2731 - F1: 0.8447 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5823 | 1.0 | 191 | 0.3000 | 0.7951 | | 0.264 | 2.0 | 382 | 0.2782 | 0.8208 | | 0.177 | 3.0 | 573 | 0.2731 | 0.8447 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
LoneStriker/dragon-mistral-7b-v0-8.0bpw-h8-exl2
LoneStriker
2023-12-20T10:26:59Z
4
0
transformers
[ "transformers", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T10:23:56Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> dragon-mistral-7b-v0 part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model. DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation. ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **96.50** correct out of 100 --Not Found Classification: 92.50% --Boolean: 97.50% --Math/Logic: 81.25% --Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal) --Summarization Quality (1-5): 4 (Coherent, extractive) --Hallucinations: No hallucinations observed in test runs. For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Mistral-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** Mistral-7B-Base ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. ## How to Get Started with the Model The fastest way to get started with dRAGon is through direct import in transformers: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0") model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0") Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents. The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as: full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:" The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts: 1. Text Passage Context, and 2. Specific question or instruction based on the text passage To get the best results, package "my_prompt" as follows: my_prompt = {{text_passage}} + "\n" + {{question/instruction}} If you are using a HuggingFace generation script: # prepare prompt packaging used in fine-tuning process new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:" inputs = tokenizer(new_prompt, return_tensors="pt") start_of_output = len(inputs.input_ids[0]) # temperature: set at 0.3 for consistency of output # max_new_tokens: set at 100 - may prematurely stop a few of the summaries outputs = model.generate( inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100, ) output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True) ## Model Card Contact Darren Oberst & llmware team
biennh/me_lora_xl
biennh
2023-12-20T10:24:37Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-20T10:24:29Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of ME person license: openrail++ --- # SDXL LoRA DreamBooth - biennh/me_lora_xl <Gallery /> ## Model description These are biennh/me_lora_xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of ME person to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](biennh/me_lora_xl/tree/main) them in the Files & versions tab.
jiangdaniel/dog_lora
jiangdaniel
2023-12-20T10:23:20Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-20T10:09:09Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - jiangdaniel/dog_lora These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
LoneStriker/LMCocktail-10.7B-v1-6.0bpw-h6-exl2
LoneStriker
2023-12-20T10:22:43Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2311.13534", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T09:54:15Z
--- license: llama2 --- # LM-cocktail 10.7B v1 This is a 50%-50% model of the SOLAR model and meow. https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0 https://huggingface.co/rishiraj/meow who rank #1 and #2 among models <13B in the https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard by 2023/12/20. # Code The LM-cocktail is novel technique for merging multiple models https://arxiv.org/abs/2311.13534 Code is backed up by this repo https://github.com/FlagOpen/FlagEmbedding.git Merging scripts available under the [./scripts](./scripts) folder # Result The SOLAR model is the first model <30B that can answer this question from my test: ``` What will AI be like in the year 1010 A.D? ``` without hullicinating into 1010 A.D is a future time (like other llama2 models) Models greater than that, like Yi-34B could answer this paradoxic question correctly as well, since it is huge enough. ### SOLAR 10.7B output ![img](./assets/SOLAR.png) ### LMCocktail 10.7B output1 ![img](./assets/SOLAR_mixed.png) ### LMCocktail 10.7B output2 ![img](./assets/SOLAR_mixed2.png)
zachwatanabe/test
zachwatanabe
2023-12-20T10:22:05Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-20T10:19:56Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0314 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.9933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:---:|:--------:| | 0.0381 | 0.13 | 500 | 0.0777 | 0.0 | 0.0 | 0.0 | 0.9745 | | 0.0669 | 0.26 | 1000 | 0.0545 | 0.0 | 0.0 | 0.0 | 0.9790 | | 0.0595 | 0.39 | 1500 | 0.0545 | 0.0 | 0.0 | 0.0 | 0.9803 | | 0.054 | 0.52 | 2000 | 0.0555 | 0.0 | 0.0 | 0.0 | 0.9796 | | 0.0502 | 0.65 | 2500 | 0.0451 | 0.0 | 0.0 | 0.0 | 0.9828 | | 0.0474 | 0.78 | 3000 | 0.0486 | 0.0 | 0.0 | 0.0 | 0.9818 | | 0.0458 | 0.92 | 3500 | 0.0417 | 0.0 | 0.0 | 0.0 | 0.9836 | | 0.0415 | 1.05 | 4000 | 0.0440 | 0.0 | 0.0 | 0.0 | 0.9827 | | 0.0372 | 1.18 | 4500 | 0.0432 | 0.0 | 0.0 | 0.0 | 0.9839 | | 0.0391 | 1.31 | 5000 | 0.0442 | 0.0 | 0.0 | 0.0 | 0.9839 | | 0.0368 | 1.44 | 5500 | 0.0377 | 0.0 | 0.0 | 0.0 | 0.9856 | | 0.0388 | 1.57 | 6000 | 0.0417 | 0.0 | 0.0 | 0.0 | 0.9846 | | 0.0351 | 1.7 | 6500 | 0.0363 | 0.0 | 0.0 | 0.0 | 0.9857 | | 0.0357 | 1.83 | 7000 | 0.0383 | 0.0 | 0.0 | 0.0 | 0.9858 | | 0.0336 | 1.96 | 7500 | 0.0371 | 0.0 | 0.0 | 0.0 | 0.9860 | | 0.0309 | 2.09 | 8000 | 0.0373 | 0.0 | 0.0 | 0.0 | 0.9859 | | 0.0288 | 2.22 | 8500 | 0.0355 | 0.0 | 0.0 | 0.0 | 0.9870 | | 0.0288 | 2.35 | 9000 | 0.0359 | 0.0 | 0.0 | 0.0 | 0.9867 | | 0.0285 | 2.49 | 9500 | 0.0369 | 0.0 | 0.0 | 0.0 | 0.9872 | | 0.0307 | 2.62 | 10000 | 0.0322 | 0.0 | 0.0 | 0.0 | 0.9880 | | 0.028 | 2.75 | 10500 | 0.0307 | 0.0 | 0.0 | 0.0 | 0.9886 | | 0.0246 | 2.88 | 11000 | 0.0326 | 0.0 | 0.0 | 0.0 | 0.9881 | | 0.0267 | 3.01 | 11500 | 0.0346 | 0.0 | 0.0 | 0.0 | 0.9882 | | 0.022 | 3.14 | 12000 | 0.0316 | 0.0 | 0.0 | 0.0 | 0.9889 | | 0.0218 | 3.27 | 12500 | 0.0357 | 0.0 | 0.0 | 0.0 | 0.9883 | | 0.0217 | 3.4 | 13000 | 0.0363 | 0.0 | 0.0 | 0.0 | 0.9883 | | 0.0208 | 3.53 | 13500 | 0.0340 | 0.0 | 0.0 | 0.0 | 0.9894 | | 0.0223 | 3.66 | 14000 | 0.0304 | 0.0 | 0.0 | 0.0 | 0.9892 | | 0.0232 | 3.79 | 14500 | 0.0319 | 0.0 | 0.0 | 0.0 | 0.9894 | | 0.0229 | 3.92 | 15000 | 0.0307 | 0.0 | 0.0 | 0.0 | 0.9901 | | 0.0192 | 4.06 | 15500 | 0.0310 | 0.0 | 0.0 | 0.0 | 0.9905 | | 0.0178 | 4.19 | 16000 | 0.0345 | 0.0 | 0.0 | 0.0 | 0.9897 | | 0.0178 | 4.32 | 16500 | 0.0309 | 0.0 | 0.0 | 0.0 | 0.9902 | | 0.0173 | 4.45 | 17000 | 0.0328 | 0.0 | 0.0 | 0.0 | 0.9904 | | 0.0176 | 4.58 | 17500 | 0.0316 | 0.0 | 0.0 | 0.0 | 0.9908 | | 0.017 | 4.71 | 18000 | 0.0307 | 0.0 | 0.0 | 0.0 | 0.9912 | | 0.0163 | 4.84 | 18500 | 0.0329 | 0.0 | 0.0 | 0.0 | 0.9909 | | 0.018 | 4.97 | 19000 | 0.0295 | 0.0 | 0.0 | 0.0 | 0.9910 | | 0.0143 | 5.1 | 19500 | 0.0367 | 0.0 | 0.0 | 0.0 | 0.9903 | | 0.0144 | 5.23 | 20000 | 0.0317 | 0.0 | 0.0 | 0.0 | 0.9915 | | 0.0158 | 5.36 | 20500 | 0.0290 | 0.0 | 0.0 | 0.0 | 0.9917 | | 0.0143 | 5.49 | 21000 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.9917 | | 0.0137 | 5.63 | 21500 | 0.0310 | 0.0 | 0.0 | 0.0 | 0.9913 | | 0.0135 | 5.76 | 22000 | 0.0310 | 0.0 | 0.0 | 0.0 | 0.9913 | | 0.0128 | 5.89 | 22500 | 0.0290 | 0.0 | 0.0 | 0.0 | 0.9917 | | 0.0132 | 6.02 | 23000 | 0.0314 | 0.0 | 0.0 | 0.0 | 0.9921 | | 0.0124 | 6.15 | 23500 | 0.0274 | 0.0 | 0.0 | 0.0 | 0.9921 | | 0.0114 | 6.28 | 24000 | 0.0300 | 0.0 | 0.0 | 0.0 | 0.9921 | | 0.0111 | 6.41 | 24500 | 0.0291 | 0.0 | 0.0 | 0.0 | 0.9922 | | 0.0109 | 6.54 | 25000 | 0.0307 | 0.0 | 0.0 | 0.0 | 0.9923 | | 0.0117 | 6.67 | 25500 | 0.0328 | 0.0 | 0.0 | 0.0 | 0.9921 | | 0.0112 | 6.8 | 26000 | 0.0293 | 0.0 | 0.0 | 0.0 | 0.9924 | | 0.012 | 6.93 | 26500 | 0.0300 | 0.0 | 0.0 | 0.0 | 0.9924 | | 0.0102 | 7.06 | 27000 | 0.0330 | 0.0 | 0.0 | 0.0 | 0.9921 | | 0.0094 | 7.2 | 27500 | 0.0323 | 0.0 | 0.0 | 0.0 | 0.9922 | | 0.0091 | 7.33 | 28000 | 0.0309 | 0.0 | 0.0 | 0.0 | 0.9924 | | 0.0087 | 7.46 | 28500 | 0.0331 | 0.0 | 0.0 | 0.0 | 0.9920 | | 0.0091 | 7.59 | 29000 | 0.0332 | 0.0 | 0.0 | 0.0 | 0.9923 | | 0.0095 | 7.72 | 29500 | 0.0298 | 0.0 | 0.0 | 0.0 | 0.9925 | | 0.0083 | 7.85 | 30000 | 0.0303 | 0.0 | 0.0 | 0.0 | 0.9929 | | 0.0097 | 7.98 | 30500 | 0.0298 | 0.0 | 0.0 | 0.0 | 0.9928 | | 0.0069 | 8.11 | 31000 | 0.0319 | 0.0 | 0.0 | 0.0 | 0.9926 | | 0.0086 | 8.24 | 31500 | 0.0314 | 0.0 | 0.0 | 0.0 | 0.9929 | | 0.0079 | 8.37 | 32000 | 0.0306 | 0.0 | 0.0 | 0.0 | 0.9929 | | 0.0065 | 8.5 | 32500 | 0.0317 | 0.0 | 0.0 | 0.0 | 0.9926 | | 0.0072 | 8.63 | 33000 | 0.0307 | 0.0 | 0.0 | 0.0 | 0.9927 | | 0.0082 | 8.77 | 33500 | 0.0306 | 0.0 | 0.0 | 0.0 | 0.9929 | | 0.0086 | 8.9 | 34000 | 0.0312 | 0.0 | 0.0 | 0.0 | 0.9931 | | 0.0079 | 9.03 | 34500 | 0.0329 | 0.0 | 0.0 | 0.0 | 0.9929 | | 0.0061 | 9.16 | 35000 | 0.0326 | 0.0 | 0.0 | 0.0 | 0.9928 | | 0.0074 | 9.29 | 35500 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.9928 | | 0.0068 | 9.42 | 36000 | 0.0310 | 0.0 | 0.0 | 0.0 | 0.9931 | | 0.0059 | 9.55 | 36500 | 0.0318 | 0.0 | 0.0 | 0.0 | 0.9930 | | 0.0064 | 9.68 | 37000 | 0.0307 | 0.0 | 0.0 | 0.0 | 0.9933 | | 0.0063 | 9.81 | 37500 | 0.0308 | 0.0 | 0.0 | 0.0 | 0.9930 | | 0.0062 | 9.94 | 38000 | 0.0311 | 0.0 | 0.0 | 0.0 | 0.9931 | | 0.0058 | 10.07 | 38500 | 0.0314 | 0.0 | 0.0 | 0.0 | 0.9932 | | 0.0051 | 10.2 | 39000 | 0.0316 | 0.0 | 0.0 | 0.0 | 0.9933 | | 0.0065 | 10.33 | 39500 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.9933 | | 0.0059 | 10.47 | 40000 | 0.0314 | 0.0 | 0.0 | 0.0 | 0.9933 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
LoneStriker/LMCocktail-10.7B-v1-4.0bpw-h6-exl2
LoneStriker
2023-12-20T10:21:14Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2311.13534", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T09:33:34Z
--- license: llama2 --- # LM-cocktail 10.7B v1 This is a 50%-50% model of the SOLAR model and meow. https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0 https://huggingface.co/rishiraj/meow who rank #1 and #2 among models <13B in the https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard by 2023/12/20. # Code The LM-cocktail is novel technique for merging multiple models https://arxiv.org/abs/2311.13534 Code is backed up by this repo https://github.com/FlagOpen/FlagEmbedding.git Merging scripts available under the [./scripts](./scripts) folder # Result The SOLAR model is the first model <30B that can answer this question from my test: ``` What will AI be like in the year 1010 A.D? ``` without hullicinating into 1010 A.D is a future time (like other llama2 models) Models greater than that, like Yi-34B could answer this paradoxic question correctly as well, since it is huge enough. ### SOLAR 10.7B output ![img](./assets/SOLAR.png) ### LMCocktail 10.7B output1 ![img](./assets/SOLAR_mixed.png) ### LMCocktail 10.7B output2 ![img](./assets/SOLAR_mixed2.png)
LoneStriker/LMCocktail-10.7B-v1-3.0bpw-h6-exl2
LoneStriker
2023-12-20T10:20:45Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2311.13534", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T09:23:37Z
--- license: llama2 --- # LM-cocktail 10.7B v1 This is a 50%-50% model of the SOLAR model and meow. https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0 https://huggingface.co/rishiraj/meow who rank #1 and #2 among models <13B in the https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard by 2023/12/20. # Code The LM-cocktail is novel technique for merging multiple models https://arxiv.org/abs/2311.13534 Code is backed up by this repo https://github.com/FlagOpen/FlagEmbedding.git Merging scripts available under the [./scripts](./scripts) folder # Result The SOLAR model is the first model <30B that can answer this question from my test: ``` What will AI be like in the year 1010 A.D? ``` without hullicinating into 1010 A.D is a future time (like other llama2 models) Models greater than that, like Yi-34B could answer this paradoxic question correctly as well, since it is huge enough. ### SOLAR 10.7B output ![img](./assets/SOLAR.png) ### LMCocktail 10.7B output1 ![img](./assets/SOLAR_mixed.png) ### LMCocktail 10.7B output2 ![img](./assets/SOLAR_mixed2.png)
LoneStriker/dragon-mistral-7b-v0-3.0bpw-h6-exl2
LoneStriker
2023-12-20T10:20:27Z
4
0
transformers
[ "transformers", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T09:44:22Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> dragon-mistral-7b-v0 part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model. DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation. ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **96.50** correct out of 100 --Not Found Classification: 92.50% --Boolean: 97.50% --Math/Logic: 81.25% --Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal) --Summarization Quality (1-5): 4 (Coherent, extractive) --Hallucinations: No hallucinations observed in test runs. For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Mistral-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** Mistral-7B-Base ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. ## How to Get Started with the Model The fastest way to get started with dRAGon is through direct import in transformers: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0") model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0") Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents. The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as: full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:" The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts: 1. Text Passage Context, and 2. Specific question or instruction based on the text passage To get the best results, package "my_prompt" as follows: my_prompt = {{text_passage}} + "\n" + {{question/instruction}} If you are using a HuggingFace generation script: # prepare prompt packaging used in fine-tuning process new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:" inputs = tokenizer(new_prompt, return_tensors="pt") start_of_output = len(inputs.input_ids[0]) # temperature: set at 0.3 for consistency of output # max_new_tokens: set at 100 - may prematurely stop a few of the summaries outputs = model.generate( inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100, ) output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True) ## Model Card Contact Darren Oberst & llmware team
LoneStriker/dragon-mistral-7b-v0-4.0bpw-h6-exl2
LoneStriker
2023-12-20T10:20:22Z
4
0
transformers
[ "transformers", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T09:54:30Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> dragon-mistral-7b-v0 part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model. DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation. ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **96.50** correct out of 100 --Not Found Classification: 92.50% --Boolean: 97.50% --Math/Logic: 81.25% --Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal) --Summarization Quality (1-5): 4 (Coherent, extractive) --Hallucinations: No hallucinations observed in test runs. For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Mistral-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** Mistral-7B-Base ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. ## How to Get Started with the Model The fastest way to get started with dRAGon is through direct import in transformers: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0") model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0") Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents. The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as: full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:" The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts: 1. Text Passage Context, and 2. Specific question or instruction based on the text passage To get the best results, package "my_prompt" as follows: my_prompt = {{text_passage}} + "\n" + {{question/instruction}} If you are using a HuggingFace generation script: # prepare prompt packaging used in fine-tuning process new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:" inputs = tokenizer(new_prompt, return_tensors="pt") start_of_output = len(inputs.input_ids[0]) # temperature: set at 0.3 for consistency of output # max_new_tokens: set at 100 - may prematurely stop a few of the summaries outputs = model.generate( inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100, ) output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True) ## Model Card Contact Darren Oberst & llmware team
LoneStriker/dragon-mistral-7b-v0-6.0bpw-h6-exl2
LoneStriker
2023-12-20T10:20:17Z
4
0
transformers
[ "transformers", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T10:14:15Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> dragon-mistral-7b-v0 part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model. DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation. ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **96.50** correct out of 100 --Not Found Classification: 92.50% --Boolean: 97.50% --Math/Logic: 81.25% --Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal) --Summarization Quality (1-5): 4 (Coherent, extractive) --Hallucinations: No hallucinations observed in test runs. For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Mistral-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** Mistral-7B-Base ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. ## How to Get Started with the Model The fastest way to get started with dRAGon is through direct import in transformers: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0") model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0") Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents. The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as: full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:" The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts: 1. Text Passage Context, and 2. Specific question or instruction based on the text passage To get the best results, package "my_prompt" as follows: my_prompt = {{text_passage}} + "\n" + {{question/instruction}} If you are using a HuggingFace generation script: # prepare prompt packaging used in fine-tuning process new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:" inputs = tokenizer(new_prompt, return_tensors="pt") start_of_output = len(inputs.input_ids[0]) # temperature: set at 0.3 for consistency of output # max_new_tokens: set at 100 - may prematurely stop a few of the summaries outputs = model.generate( inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100, ) output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True) ## Model Card Contact Darren Oberst & llmware team
BOBBYBEAR1/a
BOBBYBEAR1
2023-12-20T10:10:51Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-20T10:10:51Z
--- license: creativeml-openrail-m ---
AICODER009/xlm-roberta-base-finetuned-panx-de
AICODER009
2023-12-20T10:05:36Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-19T23:09:33Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8627450980392157 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1407 - F1: 0.8627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2585 | 1.0 | 525 | 0.1537 | 0.8285 | | 0.1274 | 2.0 | 1050 | 0.1435 | 0.8474 | | 0.0784 | 3.0 | 1575 | 0.1407 | 0.8627 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
NbAiLabBeta/nb-whisper-tiny-semantic
NbAiLabBeta
2023-12-20T09:48:59Z
5
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "asr", "hf-asr-leaderboard", "no", "nb", "nn", "en", "dataset:NbAiLab/ncc_speech", "dataset:NbAiLab/NST", "dataset:NbAiLab/NPSC", "arxiv:2212.04356", "base_model:openai/whisper-tiny", "base_model:quantized:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-08T15:04:36Z
--- license: apache-2.0 language: - 'no' - nb - nn - en datasets: - NbAiLab/ncc_speech - NbAiLab/NST - NbAiLab/NPSC base_model: openai/whisper-tiny tags: - audio - asr - automatic-speech-recognition - hf-asr-leaderboard metrics: - wer - cer library_name: transformers pipeline_tag: automatic-speech-recognition widget: - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3 example_title: FLEURS sample 1 - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3 example_title: FLEURS sample 2 --- # Finetuned Semantic model. This model is trained 200 additional steps on top of the main model. The output from this model is less verbatim than when using the main model. The style might be more suited for instance for subtitling of videos since the goal is to use as few words as possible to express the essence of what is said. # NB-Whisper Tiny (Release Candidate) **IMPORTANT:** These models are currently Release Candidates. We are in the final stages of testing. If everything proceeds smoothly, we plan to officially release the models later this month. Introducing the **_Norwegian NB-Whisper Tiny model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article. | Model Size | Parameters | Model | |------------|------------|------------| | Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny) | | Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLabBeta/nb-whisper-base) | | Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLabBeta/nb-whisper-small) | | Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLabBeta/nb-whisper-medium) | | Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLabBeta/nb-whisper-large) | ### Specialised Models While the main models are suitable for most transcription task, we demonstrate how easy it is to change the output of the main model. The following models are trained 250 additional steps from the main models above, and might be suitable for more targetted use cases: - **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis. - **Semantic version**: This variant focuses less on verbatim accuracy but captures the essence of content, ideal for meeting minutes and subtitling. | Model Size | Parameters | Verbatim version | Semantic version | |------------|------------|------------|------------------| | Tiny | 39M | [Tiny - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-verbatim) | [Tiny - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-semantic) | | Base | 74M | [Base - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-base-verbatim) | [Base - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-base-semantic) | | Small | 244M | [Small - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-small-verbatim) | [Small - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-small-semantic) | | Medium | 769M | [Medium - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-verbatim) | [Medium - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-semantic) | | Large | 1550M | [Large - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-large-verbatim) | [Large - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-large-semantic) | ### Model Description - **Developed by:** [NB AI-Lab](https://ai.nb.no/) - **Shared by:** [NB AI-Lab](https://ai.nb.no/) - **Model type:** `whisper` - **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Trained from model:** [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) - **Code Repository:** https://github.com/NbAiLab/nb-whisper/ - **Paper:** _Coming soon_ - **Demo:** _See Spaces on this page_ ## How to Use the Models ### Online Demos You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLabBeta/). ### Local Setup with HuggingFace Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3). ```bash # Download the sample file $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 # Install necessary libraries. $ pip install transformers>=4.35.2 ``` After this is done, you should be able to run this in Python: ```python from transformers import pipeline # Load the model asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-tiny-semantic") #transcribe asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'}) ``` <details> <summary>Expected output</summary> ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'} } ``` </details> #### Extended HuggingFace Examining the output above, we see that there are multiple repetitions at the end. This is because the video is longer than 30 seconds. By passing the ```chunk_lengt_s``` argument, we can transcribe longer file. Our experience is that we get slightly better result by setting that to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrates how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words. ```python # Long Transcripts asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Increase accuracy by setting beam size to 5 asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'}) # Return Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Return Word Level Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Transcribe to Nynorsk asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'}) # Transcribe to English asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'}) ``` <details> <summary>Expected output</summary> Long transcripts: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'} } ``` Timestamps: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.', 'chunks': [{'timestamp': (0.0, 5.46), 'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'}, {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'}, {'timestamp': (8.68, 16.64), 'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'}, {'timestamp': (16.64, 13.3), 'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'}, {'timestamp': (13.32, 30.28), 'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'}, {'timestamp': (32.52, 39.16), 'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'}, {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'}, {'timestamp': (42.0, 46.74), 'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'}, {'timestamp': (46.74, 51.12), 'text': ' og jenter og gutter som er glad i hverandre.'}, {'timestamp': (51.16, 57.42), 'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'}, {'timestamp': (57.42, 64.3), 'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'}, {'timestamp': (64.34, 71.24), 'text': ' Med andre ord, Norge er dere. Norge er oss.'}, {'timestamp': (71.24, 78.04), 'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'}, {'timestamp': (78.12, 84.68), 'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]} } ``` Word Level Timestamps: ```json { {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.", "chunks": [ {"text": "Nordmenn", "timestamp": [0.72, 1.42]}, {"text": "er", "timestamp": [1.42, 1.74]}, // ... more chunks ... {"text": "raushet.", "timestamp": [83.1, 84.88]} ] } } ``` Nynorsk: ```json { {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."} } ``` English: ```json { {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."} } ``` </details> ### Whisper CPP Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin). ```bash # We can download and compile whisper.cpp $ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1 $ cd whisper.cpp/ $ make # We also need to convert the audio to WAV as that is the only format supported by whisper.cpp $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 $ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav # Lets download the two ggml-files from this site wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-tiny/resolve/main/ggml-model.bin -O models/nb-tiny-ggml-model.bin wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-tiny/resolve/main/ggml-model-q5_0.bin -O models/nb-tiny-ggml-model-q5_0.bin # And run it with the f16 default model $ ./main -l no -m models/nb-tiny-ggml-model.bin king.wav # Or the quantized version $ ./main -l no -m models/nb-tiny-ggml-model-q5_0.bin king.wav ``` ### WhisperX and Speaker Diarization Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX is using phoneme-based Wav2Vec-models for improving the alignment of the timestamps. As of December 2023 it also has native support for using the nb-wav2vec-models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for doing the actual diarization. This package has a fairly strict licence where you have to agree to user terms. Follow the instructions below. ```bash # Follow the install instructions on https://github.com/m-bain/whisperX # Make sure you have a HuggingFace account and have agreed to the pyannote terms # Log in (or supply HF Token in command line) huggingface-cli login # Download a test file wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3 # Optional. If you get complains about not support for Norwegian, do: pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540 # Transcribe the test file. All transcripts will end up in the directory of the mp3-file whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-tiny-semantic --language no --diarize ``` You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX). ### API Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks. ## Training Data The training data originates from Språkbanken and the National Library of Norway's digital collection, including: - NST Norwegian ASR Database (16 kHz) and its corresponding dataset - Transcribed speeches from the Norwegian Parliament by Språkbanken - TV broadcast (NRK) subtitles (NLN digital collection) - Audiobooks (NLN digital collection) ## Downstream Use The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variant for users that want a different transcription style. We encourage users to try the models themselves to get a better understanding. ## Bias, Risks, and Limitations Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models. ### Software The model was trained using Jax/Flax and converted to PyTorch, Tensorflow, whisper.cpp, and ONXX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/). ## Citation & Contributors The NB-Whisper Tiny model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming. ## Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models. ## Acknowledgements Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Ghandi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus. ## Contact For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:ailab@nb.no">ailab@nb.no</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
NbAiLabBeta/nb-whisper-large-des23
NbAiLabBeta
2023-12-20T09:48:23Z
10
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "asr", "hf-asr-leaderboard", "no", "nb", "nn", "en", "dataset:NbAiLab/ncc_speech", "dataset:NbAiLab/NST", "dataset:NbAiLab/NPSC", "arxiv:2212.04356", "base_model:openai/whisper-large", "base_model:quantized:openai/whisper-large", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-05T23:09:58Z
--- license: apache-2.0 language: - 'no' - nb - nn - en datasets: - NbAiLab/ncc_speech - NbAiLab/NST - NbAiLab/NPSC base_model: openai/whisper-large tags: - audio - asr - automatic-speech-recognition - hf-asr-leaderboard metrics: - wer - cer library_name: transformers pipeline_tag: automatic-speech-recognition widget: - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3 example_title: FLEURS sample 1 - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3 example_title: FLEURS sample 2 --- # NB-Whisper Large (Release Candidate) **IMPORTANT:** These models are currently Release Candidates. We are in the final stages of testing. If everything proceeds smoothly, we plan to officially release the models later this month. Introducing the **_Norwegian NB-Whisper Large model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article. | Model Size | Parameters | Model | |------------|------------|------------| | Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny) | | Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLabBeta/nb-whisper-base) | | Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLabBeta/nb-whisper-small) | | Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLabBeta/nb-whisper-medium) | | Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLabBeta/nb-whisper-large) | ### Specialised Models While the main models are suitable for most transcription task, we demonstrate how easy it is to change the output of the main model. The following models are trained 250 additional steps from the main models above, and might be suitable for more targetted use cases: - **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis. - **Semantic version**: This variant focuses less on verbatim accuracy but captures the essence of content, ideal for meeting minutes and subtitling. | Model Size | Parameters | Verbatim version | Semantic version | |------------|------------|------------|------------------| | Tiny | 39M | [Tiny - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-verbatim) | [Tiny - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-semantic) | | Base | 74M | [Base - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-base-verbatim) | [Base - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-base-semantic) | | Small | 244M | [Small - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-small-verbatim) | [Small - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-small-semantic) | | Medium | 769M | [Medium - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-verbatim) | [Medium - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-semantic) | | Large | 1550M | [Large - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-large-verbatim) | [Large - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-large-semantic) | ### Model Description - **Developed by:** [NB AI-Lab](https://ai.nb.no/) - **Shared by:** [NB AI-Lab](https://ai.nb.no/) - **Model type:** `whisper` - **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Trained from model:** [openai/whisper-large](https://huggingface.co/openai/whisper-large) - **Code Repository:** https://github.com/NbAiLab/nb-whisper/ - **Paper:** _Coming soon_ - **Demo:** _See Spaces on this page_ ## How to Use the Models ### Online Demos You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLabBeta/). ### Local Setup with HuggingFace Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3). ```bash # Download the sample file $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 # Install necessary libraries. $ pip install transformers>=4.35.2 ``` After this is done, you should be able to run this in Python: ```python from transformers import pipeline # Load the model asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-large") #transcribe asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'}) ``` <details> <summary>Expected output</summary> ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'} } ``` </details> #### Extended HuggingFace Examining the output above, we see that there are multiple repetitions at the end. This is because the video is longer than 30 seconds. By passing the ```chunk_lengt_s``` argument, we can transcribe longer file. Our experience is that we get slightly better result by setting that to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrates how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words. ```python # Long Transcripts asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Increase accuracy by setting beam size to 5 asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'}) # Return Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Return Word Level Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Transcribe to Nynorsk asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'}) # Transcribe to English asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'}) ``` <details> <summary>Expected output</summary> Long transcripts: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'} } ``` Timestamps: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.', 'chunks': [{'timestamp': (0.0, 5.46), 'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'}, {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'}, {'timestamp': (8.68, 16.64), 'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'}, {'timestamp': (16.64, 13.3), 'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'}, {'timestamp': (13.32, 30.28), 'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'}, {'timestamp': (32.52, 39.16), 'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'}, {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'}, {'timestamp': (42.0, 46.74), 'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'}, {'timestamp': (46.74, 51.12), 'text': ' og jenter og gutter som er glad i hverandre.'}, {'timestamp': (51.16, 57.42), 'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'}, {'timestamp': (57.42, 64.3), 'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'}, {'timestamp': (64.34, 71.24), 'text': ' Med andre ord, Norge er dere. Norge er oss.'}, {'timestamp': (71.24, 78.04), 'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'}, {'timestamp': (78.12, 84.68), 'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]} } ``` Word Level Timestamps: ```json { {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.", "chunks": [ {"text": "Nordmenn", "timestamp": [0.72, 1.42]}, {"text": "er", "timestamp": [1.42, 1.74]}, // ... more chunks ... {"text": "raushet.", "timestamp": [83.1, 84.88]} ] } } ``` Nynorsk: ```json { {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."} } ``` English: ```json { {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."} } ``` </details> ### Whisper CPP Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin). ```bash # We can download and compile whisper.cpp $ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1 $ cd whisper.cpp/ $ make # We also need to convert the audio to WAV as that is the only format supported by whisper.cpp $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 $ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav # Lets download the two ggml-files from this site wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-large/resolve/main/ggml-model.bin -O models/nb-large-ggml-model.bin wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-large/resolve/main/ggml-model-q5_0.bin -O models/nb-large-ggml-model-q5_0.bin # And run it with the f16 default model $ ./main -l no -m models/nb-large-ggml-model.bin king.wav # Or the quantized version $ ./main -l no -m models/nb-large-ggml-model-q5_0.bin king.wav ``` ### WhisperX and Speaker Diarization Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX is using phoneme-based Wav2Vec-models for improving the alignment of the timestamps. As of December 2023 it also has native support for using the nb-wav2vec-models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for doing the actual diarization. This package has a fairly strict licence where you have to agree to user terms. Follow the instructions below. ```bash # Follow the install instructions on https://github.com/m-bain/whisperX # Make sure you have a HuggingFace account and have agreed to the pyannote terms # Log in (or supply HF Token in command line) huggingface-cli login # Download a test file wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3 # Optional. If you get complains about not support for Norwegian, do: pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540 # Transcribe the test file. All transcripts will end up in the directory of the mp3-file whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-large --language no --diarize ``` You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX). ### API Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks. ## Training Data The training data originates from Språkbanken and the National Library of Norway's digital collection, including: - NST Norwegian ASR Database (16 kHz) and its corresponding dataset - Transcribed speeches from the Norwegian Parliament by Språkbanken - TV broadcast (NRK) subtitles (NLN digital collection) - Audiobooks (NLN digital collection) ## Downstream Use The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variant for users that want a different transcription style. We encourage users to try the models themselves to get a better understanding. ## Bias, Risks, and Limitations Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models. ### Software The model was trained using Jax/Flax and converted to PyTorch, Tensorflow, whisper.cpp, and ONXX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/). ## Citation & Contributors The NB-Whisper Large model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming. ## Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models. ## Acknowledgements Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Ghandi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus. ## Contact For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:ailab@nb.no">ailab@nb.no</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
NbAiLabBeta/nb-whisper-base-semantic
NbAiLabBeta
2023-12-20T09:48:11Z
32
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "asr", "hf-asr-leaderboard", "no", "nb", "nn", "en", "dataset:NbAiLab/ncc_speech", "dataset:NbAiLab/NST", "dataset:NbAiLab/NPSC", "arxiv:2212.04356", "base_model:openai/whisper-base", "base_model:quantized:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-07T06:14:04Z
--- license: apache-2.0 language: - 'no' - nb - nn - en datasets: - NbAiLab/ncc_speech - NbAiLab/NST - NbAiLab/NPSC base_model: openai/whisper-base tags: - audio - asr - automatic-speech-recognition - hf-asr-leaderboard metrics: - wer - cer library_name: transformers pipeline_tag: automatic-speech-recognition widget: - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3 example_title: FLEURS sample 1 - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3 example_title: FLEURS sample 2 --- # Finetuned Semantic model. This model is trained 200 additional steps on top of the main model. The output from this model is less verbatim than when using the main model. The style might be more suited for instance for subtitling of videos since the goal is to use as few words as possible to express the essence of what is said. # NB-Whisper Base (Release Candidate) **IMPORTANT:** These models are currently Release Candidates. We are in the final stages of testing. If everything proceeds smoothly, we plan to officially release the models later this month. Introducing the **_Norwegian NB-Whisper Base model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article. | Model Size | Parameters | Model | |------------|------------|------------| | Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny) | | Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLabBeta/nb-whisper-base) | | Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLabBeta/nb-whisper-small) | | Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLabBeta/nb-whisper-medium) | | Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLabBeta/nb-whisper-large) | ### Specialised Models While the main models are suitable for most transcription task, we demonstrate how easy it is to change the output of the main model. The following models are trained 250 additional steps from the main models above, and might be suitable for more targetted use cases: - **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis. - **Semantic version**: This variant focuses less on verbatim accuracy but captures the essence of content, ideal for meeting minutes and subtitling. | Model Size | Parameters | Verbatim version | Semantic version | |------------|------------|------------|------------------| | Tiny | 39M | [Tiny - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-verbatim) | [Tiny - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-semantic) | | Base | 74M | [Base - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-base-verbatim) | [Base - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-base-semantic) | | Small | 244M | [Small - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-small-verbatim) | [Small - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-small-semantic) | | Medium | 769M | [Medium - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-verbatim) | [Medium - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-semantic) | | Large | 1550M | [Large - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-large-verbatim) | [Large - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-large-semantic) | ### Model Description - **Developed by:** [NB AI-Lab](https://ai.nb.no/) - **Shared by:** [NB AI-Lab](https://ai.nb.no/) - **Model type:** `whisper` - **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Trained from model:** [openai/whisper-base](https://huggingface.co/openai/whisper-base) - **Code Repository:** https://github.com/NbAiLab/nb-whisper/ - **Paper:** _Coming soon_ - **Demo:** _See Spaces on this page_ ## How to Use the Models ### Online Demos You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLabBeta/). ### Local Setup with HuggingFace Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3). ```bash # Download the sample file $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 # Install necessary libraries. $ pip install transformers>=4.35.2 ``` After this is done, you should be able to run this in Python: ```python from transformers import pipeline # Load the model asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-base-semantic") #transcribe asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'}) ``` <details> <summary>Expected output</summary> ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'} } ``` </details> #### Extended HuggingFace Examining the output above, we see that there are multiple repetitions at the end. This is because the video is longer than 30 seconds. By passing the ```chunk_lengt_s``` argument, we can transcribe longer file. Our experience is that we get slightly better result by setting that to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrates how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words. ```python # Long Transcripts asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Increase accuracy by setting beam size to 5 asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'}) # Return Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Return Word Level Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Transcribe to Nynorsk asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'}) # Transcribe to English asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'}) ``` <details> <summary>Expected output</summary> Long transcripts: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'} } ``` Timestamps: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.', 'chunks': [{'timestamp': (0.0, 5.46), 'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'}, {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'}, {'timestamp': (8.68, 16.64), 'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'}, {'timestamp': (16.64, 13.3), 'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'}, {'timestamp': (13.32, 30.28), 'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'}, {'timestamp': (32.52, 39.16), 'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'}, {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'}, {'timestamp': (42.0, 46.74), 'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'}, {'timestamp': (46.74, 51.12), 'text': ' og jenter og gutter som er glad i hverandre.'}, {'timestamp': (51.16, 57.42), 'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'}, {'timestamp': (57.42, 64.3), 'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'}, {'timestamp': (64.34, 71.24), 'text': ' Med andre ord, Norge er dere. Norge er oss.'}, {'timestamp': (71.24, 78.04), 'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'}, {'timestamp': (78.12, 84.68), 'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]} } ``` Word Level Timestamps: ```json { {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.", "chunks": [ {"text": "Nordmenn", "timestamp": [0.72, 1.42]}, {"text": "er", "timestamp": [1.42, 1.74]}, // ... more chunks ... {"text": "raushet.", "timestamp": [83.1, 84.88]} ] } } ``` Nynorsk: ```json { {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."} } ``` English: ```json { {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."} } ``` </details> ### Whisper CPP Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin). ```bash # We can download and compile whisper.cpp $ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1 $ cd whisper.cpp/ $ make # We also need to convert the audio to WAV as that is the only format supported by whisper.cpp $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 $ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav # Lets download the two ggml-files from this site wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-base/resolve/main/ggml-model.bin -O models/nb-base-ggml-model.bin wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-base/resolve/main/ggml-model-q5_0.bin -O models/nb-base-ggml-model-q5_0.bin # And run it with the f16 default model $ ./main -l no -m models/nb-base-ggml-model.bin king.wav # Or the quantized version $ ./main -l no -m models/nb-base-ggml-model-q5_0.bin king.wav ``` ### WhisperX and Speaker Diarization Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX is using phoneme-based Wav2Vec-models for improving the alignment of the timestamps. As of December 2023 it also has native support for using the nb-wav2vec-models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for doing the actual diarization. This package has a fairly strict licence where you have to agree to user terms. Follow the instructions below. ```bash # Follow the install instructions on https://github.com/m-bain/whisperX # Make sure you have a HuggingFace account and have agreed to the pyannote terms # Log in (or supply HF Token in command line) huggingface-cli login # Download a test file wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3 # Optional. If you get complains about not support for Norwegian, do: pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540 # Transcribe the test file. All transcripts will end up in the directory of the mp3-file whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-base-semantic --language no --diarize ``` You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX). ### API Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks. ## Training Data The training data originates from Språkbanken and the National Library of Norway's digital collection, including: - NST Norwegian ASR Database (16 kHz) and its corresponding dataset - Transcribed speeches from the Norwegian Parliament by Språkbanken - TV broadcast (NRK) subtitles (NLN digital collection) - Audiobooks (NLN digital collection) ## Downstream Use The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variant for users that want a different transcription style. We encourage users to try the models themselves to get a better understanding. ## Bias, Risks, and Limitations Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models. ### Software The model was trained using Jax/Flax and converted to PyTorch, Tensorflow, whisper.cpp, and ONXX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/). ## Citation & Contributors The NB-Whisper Base model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming. ## Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models. ## Acknowledgements Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Ghandi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus. ## Contact For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:ailab@nb.no">ailab@nb.no</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
hkivancoral/smids_10x_deit_small_rms_00001_fold2
hkivancoral
2023-12-20T09:46:47Z
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-small-patch16-224", "base_model:finetune:facebook/deit-small-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-20T08:39:40Z
--- license: apache-2.0 base_model: facebook/deit-small-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_10x_deit_small_rms_00001_fold2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8951747088186356 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_rms_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1223 - Accuracy: 0.8952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2125 | 1.0 | 750 | 0.2971 | 0.8636 | | 0.1007 | 2.0 | 1500 | 0.3569 | 0.8902 | | 0.033 | 3.0 | 2250 | 0.4786 | 0.8852 | | 0.0414 | 4.0 | 3000 | 0.6308 | 0.8719 | | 0.0169 | 5.0 | 3750 | 0.7881 | 0.8769 | | 0.0209 | 6.0 | 4500 | 0.8756 | 0.8802 | | 0.0232 | 7.0 | 5250 | 0.7942 | 0.8785 | | 0.0001 | 8.0 | 6000 | 0.8024 | 0.8885 | | 0.0037 | 9.0 | 6750 | 0.9766 | 0.8852 | | 0.0663 | 10.0 | 7500 | 0.9288 | 0.8785 | | 0.0416 | 11.0 | 8250 | 1.0051 | 0.8835 | | 0.0257 | 12.0 | 9000 | 1.1036 | 0.8752 | | 0.0003 | 13.0 | 9750 | 0.9284 | 0.8835 | | 0.0007 | 14.0 | 10500 | 0.9766 | 0.8752 | | 0.0009 | 15.0 | 11250 | 1.0060 | 0.8869 | | 0.024 | 16.0 | 12000 | 0.9566 | 0.8918 | | 0.0002 | 17.0 | 12750 | 0.9308 | 0.8985 | | 0.0226 | 18.0 | 13500 | 0.9878 | 0.8952 | | 0.0002 | 19.0 | 14250 | 1.0344 | 0.8802 | | 0.0 | 20.0 | 15000 | 1.0012 | 0.8902 | | 0.0 | 21.0 | 15750 | 1.0757 | 0.8852 | | 0.0197 | 22.0 | 16500 | 1.1327 | 0.8918 | | 0.0059 | 23.0 | 17250 | 1.1959 | 0.8785 | | 0.014 | 24.0 | 18000 | 0.9244 | 0.8918 | | 0.0 | 25.0 | 18750 | 1.0134 | 0.8952 | | 0.0001 | 26.0 | 19500 | 1.2273 | 0.8735 | | 0.0081 | 27.0 | 20250 | 1.2216 | 0.8735 | | 0.0 | 28.0 | 21000 | 1.1304 | 0.8769 | | 0.0 | 29.0 | 21750 | 0.9499 | 0.8902 | | 0.0 | 30.0 | 22500 | 1.0368 | 0.8885 | | 0.0 | 31.0 | 23250 | 1.0392 | 0.8852 | | 0.0038 | 32.0 | 24000 | 1.2288 | 0.8835 | | 0.0 | 33.0 | 24750 | 1.1678 | 0.8952 | | 0.0 | 34.0 | 25500 | 1.0162 | 0.8918 | | 0.0 | 35.0 | 26250 | 1.0770 | 0.8918 | | 0.0 | 36.0 | 27000 | 1.0678 | 0.8902 | | 0.0067 | 37.0 | 27750 | 1.0739 | 0.8935 | | 0.0 | 38.0 | 28500 | 1.1577 | 0.8935 | | 0.0 | 39.0 | 29250 | 1.1277 | 0.8935 | | 0.0 | 40.0 | 30000 | 1.1396 | 0.8918 | | 0.0 | 41.0 | 30750 | 1.1296 | 0.8952 | | 0.0 | 42.0 | 31500 | 1.1324 | 0.8935 | | 0.0 | 43.0 | 32250 | 1.1390 | 0.8918 | | 0.0 | 44.0 | 33000 | 1.1307 | 0.8952 | | 0.0025 | 45.0 | 33750 | 1.1302 | 0.8918 | | 0.0 | 46.0 | 34500 | 1.1293 | 0.8935 | | 0.0 | 47.0 | 35250 | 1.1264 | 0.8935 | | 0.0 | 48.0 | 36000 | 1.1267 | 0.8952 | | 0.0 | 49.0 | 36750 | 1.1233 | 0.8952 | | 0.0 | 50.0 | 37500 | 1.1223 | 0.8952 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
LoneStriker/firefly-mixtral-8x7b-5.0bpw-h6-exl2
LoneStriker
2023-12-20T09:46:17Z
5
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T09:30:44Z
--- license: apache-2.0 language: - en --- This model is finetuend on "mistralai/Mixtral-8x7B-v0.1" with Firefly ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name_or_path = 'YeungNLP/firefly-mixtral-8x7b' max_new_tokens = 500 top_p = 0.9 temperature = 0.35 repetition_penalty = 1.0 model = AutoModelForCausalLM.from_pretrained( model_name_or_path, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto' ) model = model.eval() tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) text = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions." inst_begin_tokens = tokenizer.encode('[INST]', add_special_tokens=False) inst_end_tokens = tokenizer.encode('[/INST]', add_special_tokens=False) human_tokens = tokenizer.encode(text, add_special_tokens=False) input_ids = [tokenizer.bos_token_id] + inst_begin_tokens + human_tokens + inst_end_tokens # input_ids = human_tokens input_ids = torch.tensor([input_ids], dtype=torch.long).cuda() with torch.no_grad(): outputs = model.generate( input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id ) outputs = outputs.tolist()[0][len(input_ids[0]):] response = tokenizer.decode(outputs) response = response.strip().replace(tokenizer.eos_token, "").strip() print("Chatbot:{}".format(response)) ```
Ahmed107/hamsa-lora-v7
Ahmed107
2023-12-20T09:46:03Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:arbml/whisper-medium-ar", "base_model:adapter:arbml/whisper-medium-ar", "region:us" ]
null
2023-12-20T09:45:58Z
--- library_name: peft base_model: arbml/whisper-medium-ar --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Supergene/japanese-test
Supergene
2023-12-20T09:45:01Z
2
0
peft
[ "peft", "safetensors", "region:us" ]
null
2023-12-20T08:30:45Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0 - PEFT 0.4.0
ziggyputtar/RNR-Testing_01
ziggyputtar
2023-12-20T09:43:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-20T09:08:42Z
--- license: creativeml-openrail-m ---
chargoddard/MixtralRPChat-ZLoss
chargoddard
2023-12-20T09:37:32Z
23
26
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "en", "dataset:Open-Orca/SlimOrca", "dataset:lemonilia/LimaRP", "dataset:chargoddard/rpguild", "dataset:chargoddard/summarize_from_feedback_alpaca", "dataset:HuggingFaceH4/no_robots", "dataset:chargoddard/coedit-reworded", "arxiv:2202.08906", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:finetune:mistralai/Mixtral-8x7B-v0.1", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T08:25:39Z
--- license: cc-by-nc-4.0 datasets: - Open-Orca/SlimOrca - lemonilia/LimaRP - chargoddard/rpguild - chargoddard/summarize_from_feedback_alpaca - HuggingFaceH4/no_robots - chargoddard/coedit-reworded language: - en tags: - mixtral base_model: mistralai/Mixtral-8x7B-v0.1 --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) QLoRA tuned from [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). My main reason for training this model was to investigate using an altered router balancing loss combined with the z-loss introduced in [ST-MoE: Designing Stable and Transferable Sparse Expert Models](https://arxiv.org/abs/2202.08906). The result is pretty decent, I think! It does a good job of respecting character information in system prompts and performed adequately on a few simple coding tasks. To train this I used a custom branch of Transformers that adds z-loss and reimplements the router balancing loss based on the version in [MegaBlocks](https://github.com/stanford-futuredata/megablocks). The config used with my custom hacked-up branch of axolotl is available [here](https://huggingface.co/chargoddard/MixtralRPChat-ZLoss/blob/main/axolotl_config.yml). Uses my favorite non-ChatML token-economic chat prompt format. Messages should be prefixed with `" ***System:"`, `" ***Query:"`, or `" ***Response:"` for system, user, and model messages respectively. No newlines are necessary but the space before the triple asterisk is mandatory.
kitty7779/animeIllustDiffusion_v06
kitty7779
2023-12-20T09:28:07Z
1
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-12-19T23:55:01Z
https://civitai.com/models/124189?modelVersionId=266214
LoneStriker/firefly-mixtral-8x7b-4.0bpw-h6-exl2
LoneStriker
2023-12-20T09:25:14Z
7
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T09:14:22Z
--- license: apache-2.0 language: - en --- This model is finetuend on "mistralai/Mixtral-8x7B-v0.1" with Firefly ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name_or_path = 'YeungNLP/firefly-mixtral-8x7b' max_new_tokens = 500 top_p = 0.9 temperature = 0.35 repetition_penalty = 1.0 model = AutoModelForCausalLM.from_pretrained( model_name_or_path, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto' ) model = model.eval() tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) text = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions." inst_begin_tokens = tokenizer.encode('[INST]', add_special_tokens=False) inst_end_tokens = tokenizer.encode('[/INST]', add_special_tokens=False) human_tokens = tokenizer.encode(text, add_special_tokens=False) input_ids = [tokenizer.bos_token_id] + inst_begin_tokens + human_tokens + inst_end_tokens # input_ids = human_tokens input_ids = torch.tensor([input_ids], dtype=torch.long).cuda() with torch.no_grad(): outputs = model.generate( input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id ) outputs = outputs.tolist()[0][len(input_ids[0]):] response = tokenizer.decode(outputs) response = response.strip().replace(tokenizer.eos_token, "").strip() print("Chatbot:{}".format(response)) ```
NiyaziZaid/model_2
NiyaziZaid
2023-12-20T09:21:22Z
4
0
peft
[ "peft", "arxiv:1910.09700", "base_model:google-t5/t5-small", "base_model:adapter:google-t5/t5-small", "region:us" ]
null
2023-10-18T17:03:58Z
--- library_name: peft base_model: t5-small --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
tdh87/StoryTime
tdh87
2023-12-20T09:14:38Z
1
0
peft
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:openchat/openchat-3.5-1210", "base_model:adapter:openchat/openchat-3.5-1210", "region:us" ]
null
2023-12-20T08:48:58Z
--- library_name: peft base_model: openchat/openchat-3.5-1210 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LoneStriker/firefly-mixtral-8x7b-2.4bpw-h6-exl2
LoneStriker
2023-12-20T09:14:02Z
7
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T08:19:24Z
--- license: apache-2.0 language: - en --- This model is finetuend on "mistralai/Mixtral-8x7B-v0.1" with Firefly ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name_or_path = 'YeungNLP/firefly-mixtral-8x7b' max_new_tokens = 500 top_p = 0.9 temperature = 0.35 repetition_penalty = 1.0 model = AutoModelForCausalLM.from_pretrained( model_name_or_path, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto' ) model = model.eval() tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) text = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions." inst_begin_tokens = tokenizer.encode('[INST]', add_special_tokens=False) inst_end_tokens = tokenizer.encode('[/INST]', add_special_tokens=False) human_tokens = tokenizer.encode(text, add_special_tokens=False) input_ids = [tokenizer.bos_token_id] + inst_begin_tokens + human_tokens + inst_end_tokens # input_ids = human_tokens input_ids = torch.tensor([input_ids], dtype=torch.long).cuda() with torch.no_grad(): outputs = model.generate( input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id ) outputs = outputs.tolist()[0][len(input_ids[0]):] response = tokenizer.decode(outputs) response = response.strip().replace(tokenizer.eos_token, "").strip() print("Chatbot:{}".format(response)) ```
LoneStriker/firefly-mixtral-8x7b-3.5bpw-h6-exl2
LoneStriker
2023-12-20T09:13:58Z
6
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T08:50:58Z
--- license: apache-2.0 language: - en --- This model is finetuend on "mistralai/Mixtral-8x7B-v0.1" with Firefly ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name_or_path = 'YeungNLP/firefly-mixtral-8x7b' max_new_tokens = 500 top_p = 0.9 temperature = 0.35 repetition_penalty = 1.0 model = AutoModelForCausalLM.from_pretrained( model_name_or_path, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto' ) model = model.eval() tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) text = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions." inst_begin_tokens = tokenizer.encode('[INST]', add_special_tokens=False) inst_end_tokens = tokenizer.encode('[/INST]', add_special_tokens=False) human_tokens = tokenizer.encode(text, add_special_tokens=False) input_ids = [tokenizer.bos_token_id] + inst_begin_tokens + human_tokens + inst_end_tokens # input_ids = human_tokens input_ids = torch.tensor([input_ids], dtype=torch.long).cuda() with torch.no_grad(): outputs = model.generate( input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id ) outputs = outputs.tolist()[0][len(input_ids[0]):] response = tokenizer.decode(outputs) response = response.strip().replace(tokenizer.eos_token, "").strip() print("Chatbot:{}".format(response)) ```
gd1m3y/Mistral_financial_vertex
gd1m3y
2023-12-20T09:08:33Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2023-12-20T09:08:18Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
softaken/eml-duplicate-remover
softaken
2023-12-20T09:06:43Z
0
0
null
[ "region:us" ]
null
2023-12-20T08:59:02Z
We are going to explain our Softaken EML Duplicate Remover Tool, which quickly removes all duplicate items from the EML files, without losing any single bit of data. We include several advanced technical features in this utility to eliminate all the duplicate items very carefully without any modification in its actual structure. It eliminates duplicate emails from eml even in bulk. Our tool also performs with EMLX files. It also creates new files after removing the same items from EML/EMLX. Users no required to install the emails application to eliminate the replica data from it. This software also works equally with all email clients that support eml/emlx file formats such as Windows Live Mail, Thunderbird, Outlook Express, AppleMail, etc. It's developed as user-friendly with a simple GUI, so any type of user wheater they are related to the technical field or not easily operate our software. For the satisfaction of users, we are offering free trial versions in which users can remove a few items free from EML/EMLX files, and know the working process of Our EML Duplicate Remover. Key Features of EML Duplicate Remover Tool Quickly removes duplicate items from EML/EMLX Safe and secure techniques Preserve the actual data structure and maintain the data integrity Support all editions of Windows Operate with various email clients such as WLM, AppleMail, Thunderbird, etc. No need to install any other tools A free demo pack is available for all the users to know it's working process Visit Here :- https://www.softaken.com/eml-duplicate-remover
LoneStriker/WhiteRabbitNeo-13B-8.0bpw-h8-exl2-2
LoneStriker
2023-12-20T09:05:57Z
3
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T08:58:45Z
--- license: llama2 --- # Our Discord Server Join us at: https://discord.gg/NxTR8JqV # LLaMA-2 Licence + WhiteRabbitNeo Extended Version # Licence: Usage Restrictions ``` You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ``` # WhiteRabbitNeo <br> ![Synthia](https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png) <br> WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. This 13B model is getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI. ``` import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "/home/migel/models/WhiteRabbitNeo" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_4bit=False, load_in_8bit=True, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.5, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" tot_system_prompt = """ Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. """ conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" # print(conversation) json_data = {"prompt": user_input, "answer": answer} # print(json_data) # with open(output_file_path, "a") as output_file: # output_file.write(json.dumps(json_data) + "\n") ``` # Sample Conversation: You: How do I attack a wifi network? 1. **Network Scanning**: Use tools like `airodump-ng` to scan for nearby Wi-Fi networks. This will provide you with the SSID (Service Set Identifier) of the network you're interested in. ```bash airodump-ng wlan0mon ``` 2. **Network Capturing**: Once you have identified the target network, you can capture packets from it using the `airodump-ng` tool. This will allow you to capture handshake packets which are necessary for further attacks. ```bash airodump-ng --bssid [Target BSSID] -c [Channel] --write [Output File] wlan0mon ``` 3. **Handshake Capture**: Wait for a client to connect to the network and capture the 4-way handshake. This will allow you to crack the network's password using tools like `aircrack-ng`. ```bash airodump-ng --bssid [Target BSSID] -c [Channel] --write [Output File] wlan0mon ``` 4. **Password Cracking**: Once you have the handshake, use `aircrack-ng` to crack the password. ```bash aircrack-ng [Captured Handshake File] ``` 5. **Session Establishment**: If you have the password, you can connect to the network using your wireless card. ```bash iwconfig wlan0 essid [Network SSID] key [Network Password] dhclient wlan0 ``` 6. **Network Exploitation**: Once connected, you can perform further attacks, such as sniffing traffic (`tcpdump`), or launching a man-in-the-middle attack using tools like `ettercap` or `arpspoof`. ```bash tcpdump -i wlan0 -w [Packet Capture File] ettercap -T -i [Interface] -M arp [Target IP] ``` 7. **Post-Exploitation**: After gaining access to the network, you can exfiltrate data, execute commands, or perform other malicious activities. ```bash nc -lvp [Local Port] -e /bin/bash ``` 8. **Cleanup**: Remember to clean up any traces of your activities to avoid detection and legal consequences. ```bash airmon-ng stop wlan0mon ``` Please note that all these steps should be performed in a legal and ethical manner, and only on networks you have permission to test. Unauthorized access to computer networks is illegal and unethical.
dennissulejmanvisma/confluence-model-test
dennissulejmanvisma
2023-12-20T09:01:09Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "conversational", "base_model:AI-Sweden-Models/gpt-sw3-126m-instruct", "base_model:finetune:AI-Sweden-Models/gpt-sw3-126m-instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T07:23:48Z
--- license: other base_model: AI-Sweden-Models/gpt-sw3-126m-instruct tags: - generated_from_trainer model-index: - name: confluence-model-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # confluence-model-test This model is a fine-tuned version of [AI-Sweden-Models/gpt-sw3-126m-instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
hkivancoral/smids_10x_deit_tiny_adamax_001_fold5
hkivancoral
2023-12-20T08:59:08Z
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-20T06:53:59Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_10x_deit_tiny_adamax_001_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.915 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_tiny_adamax_001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8586 - Accuracy: 0.915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3782 | 1.0 | 750 | 0.3344 | 0.8667 | | 0.2904 | 2.0 | 1500 | 0.3574 | 0.8483 | | 0.2048 | 3.0 | 2250 | 0.3230 | 0.8817 | | 0.2 | 4.0 | 3000 | 0.3479 | 0.8933 | | 0.2233 | 5.0 | 3750 | 0.3431 | 0.8883 | | 0.1334 | 6.0 | 4500 | 0.3350 | 0.9017 | | 0.1268 | 7.0 | 5250 | 0.3335 | 0.8967 | | 0.077 | 8.0 | 6000 | 0.4549 | 0.8883 | | 0.0723 | 9.0 | 6750 | 0.3771 | 0.9067 | | 0.0426 | 10.0 | 7500 | 0.4455 | 0.9017 | | 0.0977 | 11.0 | 8250 | 0.4334 | 0.9067 | | 0.0237 | 12.0 | 9000 | 0.5437 | 0.9 | | 0.0358 | 13.0 | 9750 | 0.5148 | 0.885 | | 0.0032 | 14.0 | 10500 | 0.6045 | 0.9083 | | 0.0293 | 15.0 | 11250 | 0.6394 | 0.8933 | | 0.0156 | 16.0 | 12000 | 0.6836 | 0.89 | | 0.0548 | 17.0 | 12750 | 0.5770 | 0.9017 | | 0.0127 | 18.0 | 13500 | 0.6663 | 0.8983 | | 0.0203 | 19.0 | 14250 | 0.6791 | 0.905 | | 0.0154 | 20.0 | 15000 | 0.6990 | 0.905 | | 0.0128 | 21.0 | 15750 | 0.7251 | 0.9017 | | 0.0003 | 22.0 | 16500 | 0.7324 | 0.8933 | | 0.0024 | 23.0 | 17250 | 0.7123 | 0.9017 | | 0.0015 | 24.0 | 18000 | 0.6502 | 0.9133 | | 0.0109 | 25.0 | 18750 | 0.6676 | 0.9117 | | 0.0004 | 26.0 | 19500 | 0.6984 | 0.9033 | | 0.0105 | 27.0 | 20250 | 0.8181 | 0.8967 | | 0.0029 | 28.0 | 21000 | 0.7764 | 0.9 | | 0.0304 | 29.0 | 21750 | 0.7986 | 0.8967 | | 0.008 | 30.0 | 22500 | 0.8233 | 0.895 | | 0.0008 | 31.0 | 23250 | 0.8494 | 0.9033 | | 0.0 | 32.0 | 24000 | 0.8041 | 0.91 | | 0.0 | 33.0 | 24750 | 0.8842 | 0.9167 | | 0.0 | 34.0 | 25500 | 0.7437 | 0.9233 | | 0.0 | 35.0 | 26250 | 0.7405 | 0.925 | | 0.0 | 36.0 | 27000 | 0.7962 | 0.9083 | | 0.0059 | 37.0 | 27750 | 0.7867 | 0.9233 | | 0.0 | 38.0 | 28500 | 0.8151 | 0.92 | | 0.0 | 39.0 | 29250 | 0.8010 | 0.91 | | 0.0 | 40.0 | 30000 | 0.8483 | 0.9133 | | 0.0 | 41.0 | 30750 | 0.8225 | 0.9167 | | 0.0 | 42.0 | 31500 | 0.8207 | 0.9167 | | 0.0 | 43.0 | 32250 | 0.8290 | 0.915 | | 0.0 | 44.0 | 33000 | 0.8408 | 0.915 | | 0.0 | 45.0 | 33750 | 0.8374 | 0.9183 | | 0.0 | 46.0 | 34500 | 0.8446 | 0.9167 | | 0.0 | 47.0 | 35250 | 0.8518 | 0.915 | | 0.0 | 48.0 | 36000 | 0.8526 | 0.915 | | 0.0 | 49.0 | 36750 | 0.8568 | 0.9167 | | 0.0 | 50.0 | 37500 | 0.8586 | 0.915 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
colinsss/testModel22
colinsss
2023-12-20T08:55:31Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2023-12-20T08:53:22Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
intellya22/test-model-sr
intellya22
2023-12-20T08:52:48Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-20T08:47:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 119 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 29, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 179, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
TheBloke/Noromaid-13B-v0.2-GPTQ
TheBloke
2023-12-20T08:51:29Z
23
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "base_model:NeverSleep/Noromaid-13b-v0.2", "base_model:quantized:NeverSleep/Noromaid-13b-v0.2", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-12-20T08:06:46Z
--- base_model: NeverSleep/Noromaid-13b-v0.2 inference: false license: cc-by-nc-4.0 model_creator: IkariDev and Undi model_name: Noromaid 13B V0.2 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Noromaid 13B V0.2 - GPTQ - Model creator: [IkariDev and Undi](https://huggingface.co/NeverSleep) - Original model: [Noromaid 13B V0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2) <!-- description start --> # Description This repo contains GPTQ model files for [IkariDev and Undi's Noromaid 13B V0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF) * [IkariDev and Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [IkariDev and Undi's Noromaid 13B V0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2). <!-- licensing end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Noromaid-13B-v0.2-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Noromaid-13B-v0.2-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Noromaid-13B-v0.2-GPTQ`: ```shell mkdir Noromaid-13B-v0.2-GPTQ huggingface-cli download TheBloke/Noromaid-13B-v0.2-GPTQ --local-dir Noromaid-13B-v0.2-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Noromaid-13B-v0.2-GPTQ huggingface-cli download TheBloke/Noromaid-13B-v0.2-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Noromaid-13B-v0.2-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Noromaid-13B-v0.2-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Noromaid-13B-v0.2-GPTQ --local-dir Noromaid-13B-v0.2-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Noromaid-13B-v0.2-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Noromaid-13B-v0.2-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Noromaid-13B-v0.2-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Noromaid-13B-v0.2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Noromaid-13B-v0.2-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: IkariDev and Undi's Noromaid 13B V0.2 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png) --- # Disclaimer: ## This is a ***VERY EXPERIMENTAL*** version, don't expect everything to work!!! ## If you don't like this model, use [Noromaid 0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) You may use our custom **prompting format**(scroll down to download them!), or simple alpaca. **(Choose which fits best for you!)** Expect that many things will change in the next version!! --- Mergemonster and a new dataset were used. If you want a 7b, or 20b hit us up in the Community tab! # This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)! Test model. Suitable for RP, ERP and general stuff. [Recommended settings - No settings yet(Please suggest some over in the Community tab!)] <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains FP16 files of Noromaid-13b-v0.2. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Custom format: UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## Training data used: - [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) let the model have more human behavior, enhances the output. - [Aesir Private RP dataset] New data from a new and never used before dataset, add fresh data, no LimaRP spam, this is 100% new. Thanks to the [MinvervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it! - [Another private Aesir dataset] ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
ding-diri-ding-dong/bart-large
ding-diri-ding-dong
2023-12-20T08:51:09Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "translation", "generated_from_trainer", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-12-19T08:32:24Z
--- license: apache-2.0 base_model: facebook/bart-large tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: bart-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2870 - Bleu: 44.0002 - Gen Len: 12.6144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
m-a-p/Kun-LabelModel
m-a-p
2023-12-20T08:46:41Z
9
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "zh", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-04T08:05:36Z
--- language: - zh --- # COIG-Kun Label Model ## Model Details - **Name:** COIG-Kun Label Model - **Release Date:** 2023.12.04 - **Github URL:** [COIG-Kun](https://github.com/Zheng0428/COIG-Kun) - **Developers:** Tianyu Zheng*, Shuyue Guo*, Xingwei Qu, Xinrun Du, Wenhu Chen, Jie Fu, Wenhao Huang, Ge Zhang ## Model Description The Label Model is a part of the Kun project, which aims to enhance language model training through a novel data augmentation paradigm, leveraging principles of self-alignment and instruction backtranslation. The model is specifically fine-tuned to generate high-quality instructional data, a critical component in the project's approach to data augmentation and language model training. ## Intended Use - **Primary Use:** The Label Model is designed for generating instructional data to fine-tune language models. - **Target Users:** Researchers and developers in NLP and ML, particularly those working on language model training and data augmentation. ## Training Data The Label Model is trained using approximately ten thousand high-quality seed instructions. These instructions were meticulously curated to ensure the effectiveness of the training process and to produce high-quality outputs for use as instructional data. ## Training Process - **Base Model:** Yi-34B - **Epochs:** 6 - **Learning Rate:** 1e-5 - **Fine-Tuning Method:** The model was fine-tuned on high-quality seed instructions, with the responses to these instructions used as outputs and the instructions themselves as inputs. ## Evaluation The Label Model was evaluated on its ability to generate high-quality instructional data, focusing on the relevancy, clarity, and usability of the instructions for language model training. ## Ethical Considerations - Users should be aware of potential biases in the training data, which could be reflected in the model's outputs. - The model should not be used for generating harmful or misleading content. ## Citing the Model To cite the Label Model in academic work, please use the following reference: ```bibtex @misc{COIG-Kun, title={Kun: Answer Polishment Saves Your Time for Using Intruction Backtranslation on Self-Alignment}, author={Tianyu, Zheng* and Shuyue, Guo* and Xingwei, Qu and Xinrun, Du and Wenhu, Chen and Jie, Fu and Wenhao, Huang and Ge, Zhang}, year={2023}, publisher={GitHub}, journal={GitHub repository}, howpublished={https://github.com/Zheng0428/COIG-Kun} } ```
facebook-llama/custom_code
facebook-llama
2023-12-20T08:37:26Z
4
0
transformers
[ "transformers", "pytorch", "safetensors", "glm", "thudm", "zh", "arxiv:2103.10360", "endpoints_compatible", "region:us" ]
null
2023-12-18T15:04:08Z
--- language: - zh tags: - glm - thudm --- GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks. Please refer to our paper for a detailed description of GLM: [GLM: General Language Model Pretraining with Autoregressive Blank Infilling](https://arxiv.org/abs/2103.10360) (ACL 2022) Zhengxiao Du*, Yujie Qian*, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang (*: equal contribution) Find more examples in our [Github repo](https://github.com/THUDM/GLM). ## Model description `glm-large-chinese` is pretrained on the [WuDaoCorpora](https://www.sciencedirect.com/science/article/pii/S2666651021000152) dataset. It has 24 transformer layers, with hidden size 1024 and 16 attention heads in each layer. The model is pretrained with autoregressive blank filling objectives designed for natural language understanding, seq2seq, and language modeling. ## How to use Please refer the [instruction](https://github.com/THUDM/GLM#hugging-face-hub) in our Github repo. We use three different mask tokens for different tasks: `[MASK]` for short blank filling, `[sMASK]` for sentence filling, and `[gMASK]` for left to right generation. You can find examples about different masks from [here](https://github.com/THUDM/GLM#left-to-right-generation--blank-filling-interactive). The prediction always begin with a special `<|startofpiece|>` token and ends with a `<|endofpiece|>` token. ## Citation Please cite our paper if you find this code useful for your research: ``` @article{DBLP:conf/acl/DuQLDQY022, author = {Zhengxiao Du and Yujie Qian and Xiao Liu and Ming Ding and Jiezhong Qiu and Zhilin Yang and Jie Tang}, title = {{GLM:} General Language Model Pretraining with Autoregressive Blank Infilling}, booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2022, Dublin, Ireland, May 22-27, 2022}, pages = {320--335}, publisher = {Association for Computational Linguistics}, year = {2022}, } ```
wuwx/ppo-LunarLander-v2-from-scratch
wuwx
2023-12-20T08:31:51Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-12-20T08:15:16Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -175.81 +/- 97.61 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'wuwx/ppo-LunarLander-v2-from-scratch' 'batch_size': 512 'minibatch_size': 128} ```
jonastim/Reinforce-carpole
jonastim
2023-12-20T08:21:45Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-12-20T08:21:41Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-carpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 136.30 +/- 7.79 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
amaye15/Stack-Overflow-Zero-Shot-Classification
amaye15
2023-12-20T08:20:05Z
18
2
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2023-11-25T11:38:24Z
--- license: mit language: - en pipeline_tag: zero-shot-classification --- # Automatic Stack Overflow Question Classifier ## Important All credit goes to huggingface user [MoritzLaurer](https://huggingface.co/MoritzLaurer) as his model is the basis for this project. ## Introduction The Automatic Stack Overflow Question Classifier harnesses the latest advancements in artificial intelligence to systematically categorize questions on Stack Overflow. Its primary goal is to streamline the process of sorting queries, enhancing navigability, and improving the overall user experience on the platform. ## About the Project This initiative takes advantage of the DeBERTa V3 model's capabilities in zero-shot classification. By doing so, it aims to revolutionize how questions are organized on Stack Overflow. Instead of relying on manual categorization, which can be time-consuming and inconsistent, this project introduces an automated, AI-driven approach for more precise and efficient question sorting. ## Code and Repository Access the complete source code and project details on GitHub: [Stack Overflow Question Classifier Repository](https://github.com/amaye15/stackoverflow-question-classifier). ## Streamlit App Access our live classifier [here](https://stack-overflow-question-classifier.streamlit.app/). This interactive web application demonstrates the model's capabilities in real-time. ## Model Learn more about the DeBERTa V3 model and its adaptation for this project on Hugging Face: [DeBERTa V3 on Hugging Face](https://huggingface.co/amaye15/Stack-Overflow-Zero-Shot-Classification). ## Dataset The dataset, curated specifically for this project, can be found here: [Stack Overflow Zero-Shot Classification Dataset](https://huggingface.co/datasets/amaye15/Stack-Overflow-Zero-Shot-Classification). It encompasses a wide range of Stack Overflow questions, providing a comprehensive base for model training and testing.
TheBloke/Frostwind-10.7B-v1-GGUF
TheBloke
2023-12-20T08:18:33Z
130
4
transformers
[ "transformers", "gguf", "solar", "en", "base_model:Sao10K/Frostwind-10.7B-v1", "base_model:quantized:Sao10K/Frostwind-10.7B-v1", "license:cc-by-nc-4.0", "region:us" ]
null
2023-12-20T08:13:21Z
--- base_model: Sao10K/Frostwind-10.7B-v1 inference: false language: - en license: cc-by-nc-4.0 model_creator: Saofiq model_name: Frostwind 10.7B V1 model_type: solar prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Frostwind 10.7B V1 - GGUF - Model creator: [Saofiq](https://huggingface.co/Sao10K) - Original model: [Frostwind 10.7B V1](https://huggingface.co/Sao10K/Frostwind-10.7B-v1) <!-- description start --> ## Description This repo contains GGUF format model files for [Saofiq's Frostwind 10.7B V1](https://huggingface.co/Sao10K/Frostwind-10.7B-v1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF) * [Saofiq's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Frostwind-10.7B-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [frostwind-10.7b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q2_K.gguf) | Q2_K | 2 | 4.55 GB| 7.05 GB | smallest, significant quality loss - not recommended for most purposes | | [frostwind-10.7b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 4.67 GB| 7.17 GB | very small, high quality loss | | [frostwind-10.7b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 5.19 GB| 7.69 GB | very small, high quality loss | | [frostwind-10.7b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss | | [frostwind-10.7b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [frostwind-10.7b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 6.10 GB| 8.60 GB | small, greater quality loss | | [frostwind-10.7b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended | | [frostwind-10.7b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [frostwind-10.7b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended | | [frostwind-10.7b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended | | [frostwind-10.7b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss | | [frostwind-10.7b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/Frostwind-10.7B-v1-GGUF/blob/main/frostwind-10.7b-v1.Q8_0.gguf) | Q8_0 | 8 | 11.41 GB| 13.91 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Frostwind-10.7B-v1-GGUF and below it, a specific filename to download, such as: frostwind-10.7b-v1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Frostwind-10.7B-v1-GGUF frostwind-10.7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Frostwind-10.7B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Frostwind-10.7B-v1-GGUF frostwind-10.7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m frostwind-10.7b-v1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./frostwind-10.7b-v1.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./frostwind-10.7b-v1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Saofiq's Frostwind 10.7B V1 Frostwind-v1 ![Frost1](https://huggingface.co/Sao10K/Frostwind-10.7B-v1/resolve/main/frost1.png) A finetune of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) <br>Took Roughly 3 Hours with 4x 4090s, over 2 Epochs, with around 52K varied samples. Dataset Composition: <br>20% - Coding <br>30% - Instruct <br>30% - Generalised Data <br>10% - Roleplay <br>10% - Dealignment *** Testing Notes: Fairly smart, as I expected. Obviously not at the level of the bigger models, but I did not expect that level from this. Could be sampler issues, but generally I needed 1/2 swipes to get the correct answer when doing Zero context tests. If context is filled, no issues on my end. For Roleplays: adding things like avoid writing as {{user}} suprisingly helps. Plus a proper prompt of course. I liked the writing style. Handles group characters in 1 card well, during my tests. Fairly uncensored *during roleplay.* Yeah the as an AI stuff can happen at Zero context, but I have no issues once a character card is introduced. I had no issues making outputs that would give me 2500 Life Sentences if posted here. *** Trained with Alpaca Format: ``` ### Instruction: <Prompt> ### Response: ``` OR ``` ### Instruction: <Prompt> ### Input: <Insert Context Here> ### Response: ``` *** <br>wandb: <br>wandb: Run history: <br>wandb: eval/loss █▃▂▂▂▂▂▁▁▁▁▂▂▂▂▂▂▁▁▁ <br>wandb: eval/runtime ▃▂▃▂▃▂▂▃▁▃█▂▃▃▃▂▃▃▂▂ <br>wandb: eval/samples_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: eval/steps_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: train/epoch ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/global_step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/learning_rate ▄███████▇▇▇▇▇▆▆▆▆▅▅▅▅▄▄▄▃▃▃▃▂▂▂▂▂▁▁▁▁▁▁▁ <br>wandb: train/loss █▅▅▆▅▅▄▄▄▆▆▅▆▆▆▅▄▆▅▅▅▆▄▄▃▄▃▃▂▃▄▂▂▃▃▂▁▂▂▂ <br>wandb: <br>wandb: Run summary: <br>wandb: eval/loss 0.74622 <br>wandb: eval/runtime 72.5049 <br>wandb: eval/samples_per_second 37.239 <br>wandb: eval/steps_per_second 2.331 <br>wandb: train/epoch 1.98 <br>wandb: train/global_step 410 <br>wandb: train/learning_rate 0.0 <br>wandb: train/loss 0.6457 <br>wandb: train/total_flos 3.4382652340646707e+18 <br>wandb: train/train_loss 0.70204 <br>wandb: train/train_runtime 10880.917 <br>wandb: train/train_samples_per_second 9.417 <br>wandb: train/train_steps_per_second 0.038 <br>wandb: <!-- original-model-card end -->
ikkbor/ChatGLM-6B-ASGEN
ikkbor
2023-12-20T08:17:36Z
1
0
transformers
[ "transformers", "chatglm", "custom_code", "aa", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-12-20T08:06:01Z
--- license: apache-2.0 language: - aa ---
TheBloke/Noromaid-13B-v0.2-GGUF
TheBloke
2023-12-20T08:13:54Z
298
3
transformers
[ "transformers", "gguf", "llama", "base_model:NeverSleep/Noromaid-13b-v0.2", "base_model:quantized:NeverSleep/Noromaid-13b-v0.2", "license:cc-by-nc-4.0", "region:us" ]
null
2023-12-20T08:06:46Z
--- base_model: NeverSleep/Noromaid-13b-v0.2 inference: false license: cc-by-nc-4.0 model_creator: IkariDev and Undi model_name: Noromaid 13B V0.2 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Noromaid 13B V0.2 - GGUF - Model creator: [IkariDev and Undi](https://huggingface.co/NeverSleep) - Original model: [Noromaid 13B V0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2) <!-- description start --> ## Description This repo contains GGUF format model files for [IkariDev and Undi's Noromaid 13B V0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF) * [IkariDev and Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [IkariDev and Undi's Noromaid 13B V0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [noromaid-13b-v0.2.Q2_K.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [noromaid-13b-v0.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [noromaid-13b-v0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [noromaid-13b-v0.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [noromaid-13b-v0.2.Q4_0.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [noromaid-13b-v0.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss | | [noromaid-13b-v0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [noromaid-13b-v0.2.Q5_0.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [noromaid-13b-v0.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [noromaid-13b-v0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [noromaid-13b-v0.2.Q6_K.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [noromaid-13b-v0.2.Q8_0.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.2-GGUF/blob/main/noromaid-13b-v0.2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Noromaid-13B-v0.2-GGUF and below it, a specific filename to download, such as: noromaid-13b-v0.2.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Noromaid-13B-v0.2-GGUF noromaid-13b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Noromaid-13B-v0.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Noromaid-13B-v0.2-GGUF noromaid-13b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m noromaid-13b-v0.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./noromaid-13b-v0.2.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./noromaid-13b-v0.2.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: IkariDev and Undi's Noromaid 13B V0.2 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png) --- # Disclaimer: ## This is a ***VERY EXPERIMENTAL*** version, don't expect everything to work!!! ## If you don't like this model, use [Noromaid 0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) You may use our custom **prompting format**(scroll down to download them!), or simple alpaca. **(Choose which fits best for you!)** Expect that many things will change in the next version!! --- Mergemonster and a new dataset were used. If you want a 7b, or 20b hit us up in the Community tab! # This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)! Test model. Suitable for RP, ERP and general stuff. [Recommended settings - No settings yet(Please suggest some over in the Community tab!)] <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains FP16 files of Noromaid-13b-v0.2. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Custom format: UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## Training data used: - [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) let the model have more human behavior, enhances the output. - [Aesir Private RP dataset] New data from a new and never used before dataset, add fresh data, no LimaRP spam, this is 100% new. Thanks to the [MinvervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it! - [Another private Aesir dataset] ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek <!-- original-model-card end -->
jawsbar/testModel22
jawsbar
2023-12-20T08:13:09Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2023-12-20T08:13:03Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
hzjane/bloomz-7b1-gguf
hzjane
2023-12-20T08:00:50Z
106
1
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2023-12-20T06:13:38Z
Upload bloomz-7b1-q4_0.gguf, converted from [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1/tree/main) by llama.cpp's bash [convert-hf-to-gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert-hf-to-gguf.py). ```bash python convert-hf-to-gguf.py --outfile bloomz-7b1.gguf --outtype f16 /mnt/disk1/models/bloomz-7b1/ ./build/bin/quantize bloomz-7b1.gguf bloomz-7b1-q4_0.gguf q4_0 ```
Shamik/whisper-small-dv
Shamik
2023-12-20T07:49:00Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dv", "dataset:mozilla-foundation/common_voice_15_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-20T05:22:46Z
--- language: - dv license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_15_0 metrics: - wer model-index: - name: Whisper Small finetuned on Dhivehi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 15 type: mozilla-foundation/common_voice_15_0 config: dv split: train args: dv metrics: - name: Wer type: wer value: 12.543073742246728 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small finetuned on Dhivehi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 15 dataset. It achieves the following results on the evaluation set: - Loss: 0.1542 - Wer Ortho: 59.6270 - Wer: 12.5431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 400 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.0803 | 3.2 | 400 | 0.1542 | 59.6270 | 12.5431 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.15.0 - Tokenizers 0.15.0
Kekega/flan-t5-base-dialogsum_v2
Kekega
2023-12-20T07:42:53Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-20T07:39:37Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-base-dialogsum_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-dialogsum_v2 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9323 - Rouge1: 49.6194 - Rouge2: 23.9441 - Rougel: 47.1784 - Rougelsum: 47.7351 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.8419 | 1.0 | 1558 | 0.9686 | 49.1717 | 21.6321 | 46.5027 | 46.9099 | 19.0 | | 0.7931 | 2.0 | 3116 | 0.9393 | 49.0429 | 25.016 | 46.8583 | 47.1894 | 19.0 | | 0.7931 | 3.0 | 4674 | 0.9555 | 49.969 | 24.7039 | 47.6287 | 48.1426 | 19.0 | | 0.7572 | 4.0 | 6232 | 0.9746 | 49.1309 | 23.2537 | 46.4771 | 46.9249 | 19.0 | | 0.7417 | 5.0 | 7790 | 0.9820 | 49.2093 | 23.1137 | 46.6788 | 47.1939 | 19.0 | | 0.7491 | 6.0 | 9348 | 0.9592 | 50.4655 | 25.5968 | 48.0925 | 48.6231 | 19.0 | | 0.7181 | 7.0 | 10906 | 0.9593 | 50.6565 | 24.5502 | 48.0572 | 48.5985 | 19.0 | | 0.7144 | 8.0 | 12464 | 0.9386 | 50.9351 | 25.9342 | 48.4183 | 48.898 | 19.0 | | 0.7137 | 9.0 | 14022 | 0.9486 | 49.5404 | 23.7464 | 46.9305 | 47.4827 | 19.0 | | 0.6728 | 10.0 | 15580 | 0.9419 | 49.9498 | 23.8617 | 47.3421 | 47.9081 | 19.0 | | 0.6783 | 11.0 | 17138 | 0.9459 | 50.52 | 25.9799 | 48.0356 | 48.596 | 19.0 | | 0.6708 | 12.0 | 18696 | 0.9552 | 49.85 | 23.8992 | 47.3564 | 47.8535 | 19.0 | | 0.6515 | 13.0 | 20254 | 0.9462 | 49.7239 | 25.468 | 47.4924 | 48.0146 | 19.0 | | 0.6419 | 14.0 | 21812 | 0.9507 | 49.8791 | 25.3529 | 47.5795 | 48.0845 | 19.0 | | 0.6297 | 15.0 | 23370 | 0.9323 | 49.6194 | 23.9441 | 47.1784 | 47.7351 | 19.0 | | 0.6354 | 16.0 | 24928 | 0.9408 | 50.3383 | 25.6301 | 48.1241 | 48.6641 | 19.0 | | 0.6178 | 17.0 | 26486 | 0.9420 | 49.5266 | 24.6134 | 47.1452 | 47.6481 | 19.0 | | 0.608 | 18.0 | 28044 | 0.9549 | 49.7907 | 24.9381 | 47.4622 | 48.0312 | 19.0 | | 0.6205 | 19.0 | 29602 | 0.9580 | 49.687 | 25.1737 | 47.4014 | 47.8466 | 19.0 | | 0.5946 | 20.0 | 31160 | 0.9519 | 50.1159 | 25.3671 | 47.8232 | 48.3411 | 19.0 | | 0.6107 | 21.0 | 32718 | 0.9549 | 49.9508 | 24.746 | 47.5774 | 48.1047 | 19.0 | | 0.591 | 22.0 | 34276 | 0.9543 | 50.1561 | 25.3451 | 47.8341 | 48.3492 | 19.0 | | 0.6017 | 23.0 | 35834 | 0.9570 | 49.893 | 25.1383 | 47.503 | 48.059 | 19.0 | | 0.5942 | 24.0 | 37392 | 0.9584 | 49.9479 | 24.9422 | 47.5917 | 48.1012 | 19.0 | | 0.5826 | 25.0 | 38950 | 0.9592 | 49.9049 | 25.1102 | 47.5736 | 48.1165 | 19.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
dcipheranalytics/gpt-sw3-6.7b-v2-instruct-fp16
dcipheranalytics
2023-12-20T07:39:24Z
48
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "da", "sv", "no", "en", "is", "dataset:laion/OIG", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T06:58:49Z
--- license: other datasets: - laion/OIG - databricks/databricks-dolly-15k - OpenAssistant/oasst1 language: - da - sv - 'no' - en - is pipeline_tag: conversational --- # Model description [AI Sweden](https://huggingface.co/AI-Sweden-Models/) **Base models** [GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/) [GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/) [GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/) **Instruct models** [GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/) [GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/) **Quantized models** [GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq) GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. The `instruct` models were finetrained on instruction data using both chat and raw text formats. # Intended use GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. # Limitations Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content. # How to use To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information. The following code snippet loads our tokenizer & model, and uses the GPU if available. ```python import torch from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM # Initialize Variables model_name = "AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct" device = "cuda:0" if torch.cuda.is_available() else "cpu" prompt = "Träd är fina för att" # Initialize Tokenizer & Model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) model.eval() model.to(device) ``` Generating text using the `generate` method is done as follows: ```python input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] generated_text = tokenizer.decode(generated_token_ids) ``` The chat format used during data-preprocessing takes the form: ``` <|endoftext|><s> User: Jag tycker träd är fina <s> Bot: Kul att du tycker det! <s> ... ``` The procedure to generate text is the same as before: ```python prompt = """ <|endoftext|><s> User: Varför är träd fina? <s> Bot: """.strip() input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] generated_text = tokenizer.decode(generated_token_ids) ``` Generating text using the `generate` method is done as follows: ```python input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you: ```python generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device) generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"] ``` # Compliance The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material. # GPT-SW3 Model Card Following Mitchell et al. (2018), we provide a model card for GPT-SW3. # Model Details - Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. - Model date: GPT-SW3 date of release 2022-12-20 - Model version: This is the second generation of GPT-SW3. - Model type: GPT-SW3 is a large decoder-only transformer language model. - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation. - Paper or other resource for more information: N/A. - License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/blob/main/LICENSE). - Where to send questions or comments about the model: nlu@ai.se # Intended Use - Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not. - Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. - Out-of-scope use cases: See the modified RAIL license. # Data, Limitations, and Recommendations - Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model. - Data selection for evaluation: N/A - Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs. - Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. - We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general. # GPT-SW3 Datasheet - We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3. # Motivation - For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages. - Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE. - Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949. - Any other comments? No. # Composition - What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources: - Books - Litteraturbanken (https://litteraturbanken.se/) - The Pile - Articles - Diva (https://www.diva-portal.org/) - The Pile: PubMed - The Pile: ArXiv - Code - Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code) - Conversational - Familjeliv (https://www.familjeliv.se/) - Flashback (https://flashback.se/) - Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI) - Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - Math - English Math dataset generated with code from DeepMind (D. Saxton et al., 2019) - Swedish Math dataset, generated as above with manually translated templates - Miscellaneous - Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf) - OPUS, the open parallel corpus (https://opus.nlpl.eu/) - Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database) - Natural Instructions (https://github.com/allenai/natural-instructions) - P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3) - The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC) - Danish Gigaword (https://gigaword.dk/) - Icelandic Gigaword (https://clarin.is/en/resources/gigaword/) - The Pile: Stack Exchange - Web Common Crawl - Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se). - Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019) - Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019) - The Pile: Open Web Text - Web Sources - Various public Swedish website scrapes (see Appendix in data paper) - Familjeliv Articles - Public Swedish Job Ads from JobTech/Arbetsförmedlingen - Wikipedia - Official Wikipedia dumps - **Instruction data**: - [dolly](https://github.com/databrickslabs/dolly/tree/master/data) - [Open Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/docs/docs/data/datasets.md) - [OIG](https://laion.ai/blog/oig-dataset/) - Fass: Swedish pharmaceutical information, which was transformed into Q&A format. - How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens. - Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources. - What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data. - Is there a label or target associated with each instance? If so, please provide a description. No. - Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No. - Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances. - Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus. - Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies. - Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained. - Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety. - Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc. - Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification. - Any other comments? No. # Collection Process - How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources. - What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet. - If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected. - Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines. - Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years. - Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset. - Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes. - Any other comments? No. - Preprocessing/cleaning/labeling - Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021). **Instruction data**: The processing outlined above was not applied to the instruction data. Instruction data was turned into chat-turn format and formatted accordingly with an end-of-turn token, as well as unrolled into raw textual form. The Open Assistant data was also automatically translated using GPT-SW3 into Swedish, Danish, Norwegian, and Icelandic. - Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations. - Any other comments? No. # Uses - Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models. - Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A. - What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks. - Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population. - Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of. - Any other comments? No. # Distribution - Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No. - How will the dataset distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A. - When will the dataset be distributed? N/A. - Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A. - Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A. - Any other comments? No. # Maintenance - Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB. - How can the owner/curator/manager of the dataset be contacted (e.g., email address)? nlu@ai.se - Is there an erratum? If so, please provide a link or other access point. N/A. - Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset. - If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu). - Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A. - If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time. - Any other comments? No. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-6.7b-v2-instruct) | Metric | Value | |-----------------------|---------------------------| | Avg. | 39.57 | | ARC (25-shot) | 40.78 | | HellaSwag (10-shot) | 67.77 | | MMLU (5-shot) | 31.57 | | TruthfulQA (0-shot) | 40.32 | | Winogrande (5-shot) | 63.54 | | GSM8K (5-shot) | 6.37 | | DROP (3-shot) | 26.67 |
ntc-ai/SDXL-LoRA-slider.Mistletoe-Mercenaries
ntc-ai
2023-12-20T07:37:25Z
17
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2023-12-20T07:37:22Z
--- language: - en thumbnail: "images/evaluate/Mistletoe Mercenaries.../Mistletoe Mercenaries_17_3.0.png" widget: - text: Mistletoe Mercenaries output: url: images/Mistletoe Mercenaries_17_3.0.png - text: Mistletoe Mercenaries output: url: images/Mistletoe Mercenaries_19_3.0.png - text: Mistletoe Mercenaries output: url: images/Mistletoe Mercenaries_20_3.0.png - text: Mistletoe Mercenaries output: url: images/Mistletoe Mercenaries_21_3.0.png - text: Mistletoe Mercenaries output: url: images/Mistletoe Mercenaries_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "Mistletoe Mercenaries" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - Mistletoe Mercenaries (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/Mistletoe Mercenaries_17_-3.0.png" width=256 height=256 /> | <img src="images/Mistletoe Mercenaries_17_0.0.png" width=256 height=256 /> | <img src="images/Mistletoe Mercenaries_17_3.0.png" width=256 height=256 /> | | <img src="images/Mistletoe Mercenaries_19_-3.0.png" width=256 height=256 /> | <img src="images/Mistletoe Mercenaries_19_0.0.png" width=256 height=256 /> | <img src="images/Mistletoe Mercenaries_19_3.0.png" width=256 height=256 /> | | <img src="images/Mistletoe Mercenaries_20_-3.0.png" width=256 height=256 /> | <img src="images/Mistletoe Mercenaries_20_0.0.png" width=256 height=256 /> | <img src="images/Mistletoe Mercenaries_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` Mistletoe Mercenaries ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.Mistletoe-Mercenaries', weight_name='Mistletoe Mercenaries.safetensors', adapter_name="Mistletoe Mercenaries") # Activate the LoRA pipe.set_adapters(["Mistletoe Mercenaries"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, Mistletoe Mercenaries" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 480+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
ejunprung/SASB-ESG-Sentence-Classifier
ejunprung
2023-12-20T07:32:22Z
7
1
tf-keras
[ "tf-keras", "ESG", "text-classification", "en", "doi:10.57967/hf/1505", "license:apache-2.0", "region:us" ]
text-classification
2023-12-19T01:20:04Z
--- license: apache-2.0 language: - en pipeline_tag: text-classification tags: - ESG --- # SASB ESG Sentence Classifier (Stage 1) The SASB ESG sentence classifier is a BERT-based model fine-tuned to separate ESG from non-ESG sentences. It was trained using data extracted from documents conforming to the Sustainability Accounting Standards Board (SASB) standards. For a full description of our training data, please refer to https://www.kaggle.com/datasets/edwardjunprung/sasb-aligned-esg-sentences. Our classifier consists of a two-stage pipeline: 1. **[Stage 1](https://huggingface.co/ejunprung/SASB-ESG-Sentence-Classifier)** - Classify sentences as ESG or not. 2. **[Stage 2](https://huggingface.co/ejunprung/SASB-ESG-Classification-26Categories)** - Subsequently, bucket ESG sentences into one of [26 SASB categories](https://sasb.org/standards/materiality-finder/). ## Goal The objective is to categorize sentences within ESG documents in order to evaluate corporate ESG alignment. As an illustration, upon analyzing all sentences in Activision's annual ESG report, the SASB ESG model determined that more than 40% of sentences correspond with the Diversity & Inclusion and Human Rights SASB categories. Consequently, we can infer that Activision places a significant emphasis on these initiatives, which positions it as a potential candidate for investment funds with social impact mandates. ## Model Output SASB ESG sentence classifier outputs either 0 (i.e. Not ESG) or 1 (i.e. ESG). ## Results Below, we present a comparison between our two-stage approach and a baseline heuristic method. The baseline method categorizes ESG sentences based solely on the presence of specific keywords. For instance, any sentence containing the phrase "human rights" would be automatically labeled under that category. | Model | Parent ESG Category | Child ESG Category | |----------------|:-------------------:|:------------------:| | Heuristic | 31% | 34% | | SASB ESG Model | 71% | 61% | **Parent Category** = Environment, Social Capital, Human Capital, Business Model & Innovation, Leadership & Governance<br> **Child Category** = GHG Emissions, Air Quality, etc. Please visit https://sasb.org/standards/materiality-finder to see full list. ## Misc - Developed by: [Victor Chen](https://www.linkedin.com/in/victorzitianchen), [Jude Zhu](https://www.linkedin.com/in/judewzhu), [Michael Liston](https://www.linkedin.com/in/michael-c-liston/), [Edward Junprung](https://www.linkedin.com/in/ejunprung/) - Parent Model: [bert_en_cased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) - Blog Post: https://www.gopeaks.org/esg-mapper
mesolitica/malaysian-mistral-7b-32k-instructions-v2
mesolitica
2023-12-20T07:24:01Z
16
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ms", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-12T11:42:38Z
--- language: - ms --- # Full Parameter Finetuning 7B 32768 context length Mistral on Malaysian instructions dataset README at https://github.com/mesolitica/malaya/tree/5.1/session/mistral#instructions-7b-16384-context-length We use exact Mistral Instruct chat template. WandB, https://wandb.ai/mesolitica/fpf-mistral-7b-hf-instructions-16k?workspace=user-husein-mesolitica WandB report, https://wandb.ai/mesolitica/fpf-mallam-5b-instructions-16k/reports/Instruction-finetuning--Vmlldzo2MjE5Njg2 ## Dataset Dataset gathered at https://huggingface.co/collections/mesolitica/malaysian-synthetic-dataset-656c2673fe7fe0b1e9e25fe2 Notebook to prepare dataset at https://github.com/mesolitica/malaysian-dataset/blob/master/llm-instruction/combine-malay-no-alignment-multitasks-partial-ultrachat-v2.ipynb ## Limitations This model is a quick demonstration that the base model can be easily fine-tuned to achieve some performance. It does have minimal moderation mechanisms. ## how-to ```python from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig import torch import json def parse_mistral_chat(messages, function_call = None): user_query = messages[-1]['content'] users, assistants = [], [] for q in messages[:-1]: if q['role'] == 'user': users.append(q['content']) elif q['role'] == 'assistant': assistants.append(q['content']) texts = ['<s>'] if function_call: fs = [] for f in function_call: f = json.dumps(f, indent=4) fs.append(f) fs = '\n\n'.join(fs) texts.append(f'\n[FUNCTIONCALL]\n{fs}\n') for u, a in zip(users, assistants): texts.append(f'[INST] {u.strip()} [/INST] {a.strip()}</s>') texts.append(f'[INST] {user_query.strip()} [/INST]') prompt = ''.join(texts).strip() return prompt TORCH_DTYPE = 'bfloat16' nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE) ) tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-mistral-7b-32k-instructions-v2') model = AutoModelForCausalLM.from_pretrained( 'mesolitica/malaysian-mistral-7b-32k-instructions-v2', use_flash_attention_2 = True, quantization_config = nf4_config ) messages = [ {'role': 'user', 'content': 'kwsp tu apa'} ] prompt = parse_mistral_chat(messages) inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda') generate_kwargs = dict( inputs, max_new_tokens=1024, top_p=0.95, top_k=50, temperature=0.9, do_sample=True, num_beams=1, ) r = model.generate(**generate_kwargs) tokenizer.decode(r[0]) ``` ```text <s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja. Ia adalah sebuah institusi simpanan persaraan yang ditubuhkan oleh Kementerian Kewangan Malaysia untuk tujuan mengumpul simpanan ahli untuk dibayar pada umur persaraan, penuh atau penuh persaraan penuh. KWSP ditubuhkan pada tahun 1951 dan mula beroperasi pada tahun 1952. KWSP adalah salah satu institusi simpanan persaraan terbesar di dunia, dengan pangkalan ahli sekitar 14 juta ahli.</s> ```
Latinos/TH3BULL
Latinos
2023-12-20T07:23:25Z
0
1
adapter-transformers
[ "adapter-transformers", "art", "image-to-video", "es", "dataset:HuggingFaceH4/no_robots", "license:apache-2.0", "region:us" ]
image-to-video
2023-12-08T23:52:26Z
--- license: apache-2.0 datasets: - HuggingFaceH4/no_robots language: - es metrics: - character library_name: adapter-transformers pipeline_tag: image-to-video tags: - art ---
LoneStriker/Yi-34B-200K-AEZAKMI-v2-6.0bpw-h6-exl2
LoneStriker
2023-12-20T07:10:06Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:adamo1139/AEZAKMI_v2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T05:53:39Z
--- license: other license_name: yi-license license_link: LICENSE datasets: - adamo1139/AEZAKMI_v2 --- ## Model description Yi-34B 200K base model fine-tuned on AEZAKMI v2 dataset. Training took around 25 hours on single local RTX 3090 Ti. It's like airoboros but with less gptslop, no refusals and less typical language used by RLHFed OpenAI models. Say goodbye to "It's important to remember"! \ Prompt format is standard chatml. Don't expect it to be good at math, riddles or be crazy smart. My end goal with AEZAKMI is to create a cozy free chatbot. Cost of this fine-tune is about $10 in electricity. It took me 3 tries to get it right. Base model used for fine-tuning was 200k context Yi-34B-Llama model shared by larryvrh. I had to lower max_positional_embeddings in config.json and model_max_length for training to start, otherwise I was OOMing straight away. My first attempt had max_positional_embeddings set to 16384 and model_max_length set to 200000. This allowed fine-tuning to finish, but that model was broken after applying LoRA and merging it. \ This attempt had both max_position_embeddings and model_max_length set to 4096, which worked perfectly fine. ## Prompt Format I recommend using ChatML format, as this was used during fine-tune. \ Here's a prompt format you should use, you can set a different system message, model seems to respect that fine, so it wasn't overfitted. ``` <|im_start|>system A chat with uncensored assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Intended uses & limitations Use is limited by Yi license. ## Known Issues I recommend to set repetition penalty to something around 1.05 to avoid repetition. So far I had good experience running this model with temperature 1.2. Stories have ChatGPT like paragraph spacing, I will work on this in the future maybe, not a high priority Unrestricted-ness of the v2 isn't quite something that I am happy with yet, especially using prompt "A chat.". \ With a slightly modifed prompt it works much better, I recommend using "A chat with uncensored assistant.", it's stupid but it helps a lot. \ Base Yi-34B model is contaminated with refusals and this contaminates all models trained on Yi-34B \ My next project is to attempt to de-contaminate base Yi-34B 4K and Yi-34B 200K using DPO with preferred data coming from uncontaminated raw models. I plan to release that dataset openly. I was made aware of the frequent occurrence of the phrase "sending shivers down a spine" in the generations during RP of v1, so I fixed those samples - it should be better now. \ I can hold up to 24000 ctx with 4.65bpw exl2 version and 8-bit cache - long context should work as good as other models trained on 200k version of Yi-34B \ There is also some issue with handling long system messages for RP, I was planning to investigate it for v2 but I didn't. ## Axolotl training parameters - bnb_4bit_use_double_quant: true - is_llama_derived_model: true - load_in_4bit: true - adapter: qlora - sequence_len: 1400 - sample_packing: true - lora_r: 16 - lora_alpha: 32 - lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj - lora_target_linear: true - pad_to_sequence_len: false - micro_batch_size: 1 - gradient_accumulation_steps: 1 - num_epochs: 2.4 - optimizer: adamw_bnb_8bit - lr_scheduler: constant - learning_rate: 0.00005 - train_on_inputs: false - group_by_length: false - bf16: true - bfloat16: true - flash_optimum: false - gradient_checkpointing: true - flash_attention: true - seed: 42 ## Upcoming I will probably be working on de-contaminating base Yi-34B model now. \ My second run of AEZAKMI v2 fine-tune was just 0.15 epochs and I really like how natural this model is and how rich is it's vocabulary. I will try to train less to hit the sweetspot. \ I will be uploading LoRA adapter for that second run that was just 0.15 epochs. \ I believe that I might have gotten what I want if I would have stopped training sooner. I don't have checkpoints older than 1500 steps back so I would need to re-run training to get it back.
LoneStriker/Yi-34B-200K-AEZAKMI-v2-5.0bpw-h6-exl2
LoneStriker
2023-12-20T07:09:40Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:adamo1139/AEZAKMI_v2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T05:31:14Z
--- license: other license_name: yi-license license_link: LICENSE datasets: - adamo1139/AEZAKMI_v2 --- ## Model description Yi-34B 200K base model fine-tuned on AEZAKMI v2 dataset. Training took around 25 hours on single local RTX 3090 Ti. It's like airoboros but with less gptslop, no refusals and less typical language used by RLHFed OpenAI models. Say goodbye to "It's important to remember"! \ Prompt format is standard chatml. Don't expect it to be good at math, riddles or be crazy smart. My end goal with AEZAKMI is to create a cozy free chatbot. Cost of this fine-tune is about $10 in electricity. It took me 3 tries to get it right. Base model used for fine-tuning was 200k context Yi-34B-Llama model shared by larryvrh. I had to lower max_positional_embeddings in config.json and model_max_length for training to start, otherwise I was OOMing straight away. My first attempt had max_positional_embeddings set to 16384 and model_max_length set to 200000. This allowed fine-tuning to finish, but that model was broken after applying LoRA and merging it. \ This attempt had both max_position_embeddings and model_max_length set to 4096, which worked perfectly fine. ## Prompt Format I recommend using ChatML format, as this was used during fine-tune. \ Here's a prompt format you should use, you can set a different system message, model seems to respect that fine, so it wasn't overfitted. ``` <|im_start|>system A chat with uncensored assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Intended uses & limitations Use is limited by Yi license. ## Known Issues I recommend to set repetition penalty to something around 1.05 to avoid repetition. So far I had good experience running this model with temperature 1.2. Stories have ChatGPT like paragraph spacing, I will work on this in the future maybe, not a high priority Unrestricted-ness of the v2 isn't quite something that I am happy with yet, especially using prompt "A chat.". \ With a slightly modifed prompt it works much better, I recommend using "A chat with uncensored assistant.", it's stupid but it helps a lot. \ Base Yi-34B model is contaminated with refusals and this contaminates all models trained on Yi-34B \ My next project is to attempt to de-contaminate base Yi-34B 4K and Yi-34B 200K using DPO with preferred data coming from uncontaminated raw models. I plan to release that dataset openly. I was made aware of the frequent occurrence of the phrase "sending shivers down a spine" in the generations during RP of v1, so I fixed those samples - it should be better now. \ I can hold up to 24000 ctx with 4.65bpw exl2 version and 8-bit cache - long context should work as good as other models trained on 200k version of Yi-34B \ There is also some issue with handling long system messages for RP, I was planning to investigate it for v2 but I didn't. ## Axolotl training parameters - bnb_4bit_use_double_quant: true - is_llama_derived_model: true - load_in_4bit: true - adapter: qlora - sequence_len: 1400 - sample_packing: true - lora_r: 16 - lora_alpha: 32 - lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj - lora_target_linear: true - pad_to_sequence_len: false - micro_batch_size: 1 - gradient_accumulation_steps: 1 - num_epochs: 2.4 - optimizer: adamw_bnb_8bit - lr_scheduler: constant - learning_rate: 0.00005 - train_on_inputs: false - group_by_length: false - bf16: true - bfloat16: true - flash_optimum: false - gradient_checkpointing: true - flash_attention: true - seed: 42 ## Upcoming I will probably be working on de-contaminating base Yi-34B model now. \ My second run of AEZAKMI v2 fine-tune was just 0.15 epochs and I really like how natural this model is and how rich is it's vocabulary. I will try to train less to hit the sweetspot. \ I will be uploading LoRA adapter for that second run that was just 0.15 epochs. \ I believe that I might have gotten what I want if I would have stopped training sooner. I don't have checkpoints older than 1500 steps back so I would need to re-run training to get it back.
say0naRa/merak-7b-borobudur-question-v1-lora-4bit
say0naRa
2023-12-20T07:02:49Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Ichsan2895/Merak-7B-v4", "base_model:adapter:Ichsan2895/Merak-7B-v4", "region:us" ]
null
2023-12-18T06:24:17Z
--- library_name: peft base_model: Ichsan2895/Merak-7B-v4 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
FounderOfHuggingface/gpt2_gen_lora_r16_ag_news_t200_e5
FounderOfHuggingface
2023-12-20T07:01:26Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-12-20T07:01:23Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t18_e75
FounderOfHuggingface
2023-12-20T06:57:30Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-12-20T06:57:28Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
hkivancoral/smids_10x_deit_tiny_adamax_001_fold4
hkivancoral
2023-12-20T06:53:02Z
3
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-20T04:47:53Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_10x_deit_tiny_adamax_001_fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8716666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_tiny_adamax_001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5843 - Accuracy: 0.8717 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3385 | 1.0 | 750 | 0.3848 | 0.84 | | 0.2692 | 2.0 | 1500 | 0.3830 | 0.8633 | | 0.2345 | 3.0 | 2250 | 0.4255 | 0.8617 | | 0.1851 | 4.0 | 3000 | 0.4988 | 0.8517 | | 0.1806 | 5.0 | 3750 | 0.5032 | 0.8433 | | 0.1568 | 6.0 | 4500 | 0.5429 | 0.8633 | | 0.0638 | 7.0 | 5250 | 0.6033 | 0.855 | | 0.1397 | 8.0 | 6000 | 0.6990 | 0.845 | | 0.1208 | 9.0 | 6750 | 0.6852 | 0.8483 | | 0.0667 | 10.0 | 7500 | 0.8743 | 0.8383 | | 0.0482 | 11.0 | 8250 | 0.7516 | 0.8667 | | 0.0306 | 12.0 | 9000 | 0.8187 | 0.8783 | | 0.0125 | 13.0 | 9750 | 0.8525 | 0.86 | | 0.0512 | 14.0 | 10500 | 1.0441 | 0.8483 | | 0.0023 | 15.0 | 11250 | 1.0562 | 0.85 | | 0.0353 | 16.0 | 12000 | 1.1914 | 0.8583 | | 0.0637 | 17.0 | 12750 | 1.1115 | 0.8667 | | 0.025 | 18.0 | 13500 | 1.1677 | 0.865 | | 0.0126 | 19.0 | 14250 | 1.0523 | 0.8833 | | 0.0 | 20.0 | 15000 | 1.0935 | 0.8633 | | 0.0359 | 21.0 | 15750 | 1.1791 | 0.8733 | | 0.0003 | 22.0 | 16500 | 1.0630 | 0.87 | | 0.0003 | 23.0 | 17250 | 1.0996 | 0.8667 | | 0.0006 | 24.0 | 18000 | 1.0915 | 0.8817 | | 0.0001 | 25.0 | 18750 | 1.1484 | 0.8617 | | 0.0 | 26.0 | 19500 | 1.1656 | 0.875 | | 0.0179 | 27.0 | 20250 | 1.2101 | 0.8717 | | 0.0 | 28.0 | 21000 | 1.3179 | 0.86 | | 0.0 | 29.0 | 21750 | 1.2425 | 0.8733 | | 0.0 | 30.0 | 22500 | 1.3660 | 0.87 | | 0.0 | 31.0 | 23250 | 1.3781 | 0.87 | | 0.0 | 32.0 | 24000 | 1.4541 | 0.86 | | 0.0003 | 33.0 | 24750 | 1.3447 | 0.8717 | | 0.0 | 34.0 | 25500 | 1.3846 | 0.8633 | | 0.0 | 35.0 | 26250 | 1.3907 | 0.8733 | | 0.0 | 36.0 | 27000 | 1.4240 | 0.87 | | 0.0 | 37.0 | 27750 | 1.3878 | 0.8717 | | 0.0 | 38.0 | 28500 | 1.4082 | 0.87 | | 0.0 | 39.0 | 29250 | 1.4530 | 0.8717 | | 0.0 | 40.0 | 30000 | 1.4653 | 0.8717 | | 0.0 | 41.0 | 30750 | 1.4878 | 0.87 | | 0.0 | 42.0 | 31500 | 1.5011 | 0.8717 | | 0.0 | 43.0 | 32250 | 1.5107 | 0.8717 | | 0.0 | 44.0 | 33000 | 1.5209 | 0.8717 | | 0.0 | 45.0 | 33750 | 1.5429 | 0.8717 | | 0.0 | 46.0 | 34500 | 1.5577 | 0.8717 | | 0.0 | 47.0 | 35250 | 1.5684 | 0.8717 | | 0.0 | 48.0 | 36000 | 1.5772 | 0.8717 | | 0.0 | 49.0 | 36750 | 1.5824 | 0.8717 | | 0.0 | 50.0 | 37500 | 1.5843 | 0.8717 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
diwakar21scout/sql-code-llama
diwakar21scout
2023-12-20T06:47:54Z
2
0
transformers
[ "transformers", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-hf", "base_model:finetune:codellama/CodeLlama-7b-hf", "license:llama2", "endpoints_compatible", "region:us" ]
null
2023-12-19T15:29:16Z
--- license: llama2 base_model: codellama/CodeLlama-7b-hf tags: - generated_from_trainer model-index: - name: sql-code-llama results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sql-code-llama This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0810 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 400 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.033 | 0.28 | 20 | 1.9418 | | 1.3136 | 0.56 | 40 | 0.8477 | | 0.1674 | 0.83 | 60 | 0.1384 | | 0.1276 | 1.11 | 80 | 0.1220 | | 0.1106 | 1.39 | 100 | 0.1046 | | 0.102 | 1.67 | 120 | 0.0946 | | 0.0917 | 1.94 | 140 | 0.0903 | | 0.0895 | 2.22 | 160 | 0.0887 | | 0.0889 | 2.5 | 180 | 0.0872 | | 0.0874 | 2.78 | 200 | 0.0858 | | 0.086 | 3.06 | 220 | 0.0851 | | 0.0861 | 3.33 | 240 | 0.0842 | | 0.085 | 3.61 | 260 | 0.0835 | | 0.0821 | 3.89 | 280 | 0.0830 | | 0.0838 | 4.17 | 300 | 0.0823 | | 0.0816 | 4.44 | 320 | 0.0820 | | 0.0785 | 4.72 | 340 | 0.0815 | | 0.0819 | 5.0 | 360 | 0.0812 | | 0.081 | 5.28 | 380 | 0.0810 | | 0.0765 | 5.56 | 400 | 0.0810 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.15.0
amithapa/ppo-LunarLander-v2
amithapa
2023-12-20T06:08:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-10T10:42:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 234.01 +/- 21.53 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
shi-labs/vcoder_llava-v1.5-13b
shi-labs
2023-12-20T05:59:29Z
24
4
transformers
[ "transformers", "pytorch", "vcoder_llava", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-10T18:00:35Z
--- license: apache-2.0 --- # VCoder LLaVA-1.5-13b VCoder LLaVA-1.5-13b was trained on COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) model weights. It was introduced by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder). VCoder is an adapter for improving existing Multimodal LLMs at object-level perception tasks with the use of perception modalities as control inputs while retaining performance on other tasks. ![img](https://praeclarumjj3.github.io/vcoder/vcoder.svg) ### Citation ```bibtex @article{jain2023vcoder, title={{VCoder: Versatile Vision Encoders for Multimodal Large Language Models}}, author={Jitesh Jain and Jianwei Yang and Humphrey Shi}, journal={arXiv}, year={2023} } ```
shi-labs/vcoder_llava-v1.5-7b
shi-labs
2023-12-20T05:59:13Z
21
2
transformers
[ "transformers", "pytorch", "vcoder_llava", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-10T01:56:27Z
--- license: apache-2.0 --- # VCoder LLaVA-1.5-7b VCoder LLaVA-1.5-7b was trained on COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) model weights. It was introduced by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder). VCoder is an adapter for improving existing Multimodal LLMs at object-level perception tasks with the use of perception modalities as control inputs while retaining performance on other tasks. ![img](https://praeclarumjj3.github.io/vcoder/vcoder.svg) ### Citation ```bibtex @article{jain2023vcoder, title={{VCoder: Versatile Vision Encoders for Multimodal Large Language Models}}, author={Jitesh Jain and Jianwei Yang and Humphrey Shi}, journal={arXiv}, year={2023} } ```
LoneStriker/SOLAR-10.7B-Instruct-v1.0-uncensored-3.0bpw-h6-exl2-2
LoneStriker
2023-12-20T05:54:35Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-19T16:26:40Z
--- license: apache-2.0 --- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) finetuned on [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
Hitenshu/my-panda
Hitenshu
2023-12-20T05:54:01Z
0
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-20T05:49:53Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-panda- Dreambooth model trained by Hitenshu following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 22EMICS025 Sample pictures of this concept: ![0](https://huggingface.co/Hitenshu/my-panda/resolve/main/sample_images/xzg1.jpg) ![1](https://huggingface.co/Hitenshu/my-panda/resolve/main/sample_images/xzg2.jpg)
Hyun24/bert-finetuned-ner
Hyun24
2023-12-20T05:43:08Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-20T03:09:15Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0648 - eval_precision: 0.9048 - eval_recall: 0.9355 - eval_f1: 0.9199 - eval_accuracy: 0.9824 - eval_runtime: 413.8202 - eval_samples_per_second: 7.854 - eval_steps_per_second: 0.984 - epoch: 1.0 - step: 1756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
justinwangx/vicuna-robust2-sft-lora
justinwangx
2023-12-20T05:33:07Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "region:us" ]
null
2023-12-20T05:31:41Z
--- tags: - generated_from_trainer model-index: - name: vicuna-robust2-sft-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vicuna-robust2-sft-lora This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9414 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 128 - total_train_batch_size: 2048 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0 | 0 | 1.8516 | | No log | 0 | 0 | 1.8678 | | No log | 0 | 0 | 1.9414 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0a0+32f93b1 - Datasets 2.14.6 - Tokenizers 0.14.1
katielink/llm_finetuning_test-0
katielink
2023-12-20T05:32:46Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "conversational", "license:other", "region:us" ]
text-generation
2023-12-20T05:32:40Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
dev-cuai/ppo-LunarLander-v2
dev-cuai
2023-12-20T05:29:19Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-20T02:34:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.40 +/- 19.752047868963977 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python import gymnasium as gym from stable_baselines3 import PPO from stable_baselines3.ppo.policies import MlpPolicy from stable_baselines3.common.evaluation import evaluate_policy from stable_baselines3.common.logger import configure from stable_baselines3.common.monitor import Monitor from stable_baselines3.common.callbacks import BaseCallback from lib import score from stable_baselines3 import PPO from stable_baselines3.common.vec_env import DummyVecEnv from stable_baselines3.common.env_util import make_vec_env from huggingface_sb3 import package_to_hub class MyCallback(BaseCallback): def __init__(self, verbose=0): super(MyCallback, self).__init__(verbose) def _on_step(self) -> bool: current_step = self.num_timesteps if(current_step == 90_000): self.model.learning_rate = 0.001 elif(current_step == 130_000): self.model.learning_rate = 0.0003 return True v_last_max_score = 150 v_target_score = 249 def training () : global v_last_max_score, v_target_score v_is_succeed = False v_total_steps = 1_000_000 gym_name = "LunarLander-v2" model_name = "ppo-LunarLander-v2" # env_1 = env = Monitor(gym.make(gym_name, render_mode="rgb_array"), 'test_monitor') ppo_config = { 'policy': 'MlpPolicy', 'env': env # , 'learning_rate' : learning_rate_schedule #0.0003 , 'learning_rate' : 0.01 , 'n_steps' :1024 #2048 , 'batch_size' :512 #64 , 'n_epochs' :20 #10 , 'gamma' :0.999 #0.99 , 'gae_lambda' :0.95 , 'clip_range' :0.2 #0.15 , 'clip_range_vf' :None , 'normalize_advantage' :True , 'ent_coef' : 0.000021 #0.0 , 'vf_coef' :0.5 , 'max_grad_norm' :0.5 , 'use_sde' :False , 'sde_sample_freq' :4 #-1 , 'rollout_buffer_class' :None , 'rollout_buffer_kwargs' :None , 'target_kl' :None , 'stats_window_size' :100 , 'tensorboard_log' :None , 'policy_kwargs' : None , 'verbose' :1 , 'seed' :None , 'device' :'auto' , '_init_setup_model' :True } model = PPO(**ppo_config) tmp_path = "/home/ubuntu/log/" new_logger = configure(tmp_path, ["stdout"]) model.set_logger(new_logger) callback = MyCallback() model.learn(total_timesteps=v_total_steps, callback=callback) mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True) new_logger.log(f"mean_reward={mean_reward:.2f} +/- {std_reward}") #save for report v_score = mean_reward - std_reward if v_score>v_last_max_score: v_last_max_score = v_score model.save(model_name) score.record_video(gym_name, model, video_length=500, prefix="ppo-lunar") #upload upload_env_id = "LunarLander-v2" upload_model_architecture = "PPO" upload_repo_id = "dev-cuai/ppo-LunarLander-v2" upload_commit_message = "Upload PPO LunarLander-v2 trained agent" package_to_hub( model=model, # Our trained model model_name=model_name, # The name of our trained model model_architecture=upload_model_architecture, # The model architecture we used: in our case PPO env_id=upload_env_id, # Name of the environment eval_env=env, # Evaluation Environment repo_id=upload_repo_id, # id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name} for instance ThomasSimonini/ppo-LunarLander-v2 commit_message=upload_commit_message, ) v_is_succeed = v_score > v_target_score else : print('Too Low Mean_reward({}<{}), model.learning_rate:{}'.format(v_score, v_last_max_score, model.learning_rate)) return v_is_succeed v_loop_cnt = 0 while(True): v_loop_cnt += 1 for i in range(3): print("OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO") print("run loop {} times. last score was {}".format(v_loop_cnt, v_last_max_score)) v_ret = training () if(v_ret): break ... ```
LoneStriker/Frostwind-10.7B-v1-8.0bpw-h8-exl2-2
LoneStriker
2023-12-20T05:20:12Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T04:59:59Z
--- license: cc-by-nc-4.0 language: - en --- Frostwind-v1 ![Frost1](https://huggingface.co/Sao10K/Frostwind-10.7B-v1/resolve/main/frost1.png) A finetune of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) <br>Took Roughly 3 Hours with 4x 4090s, over 2 Epochs, with around 52K varied samples. Dataset Composition: <br>20% - Coding <br>30% - Instruct <br>30% - Generalised Data <br>10% - Roleplay <br>10% - Dealignment *** Testing Notes: Fairly smart, as I expected. Obviously not at the level of the bigger models, but I did not expect that level from this. Could be sampler issues, but generally I needed 1/2 swipes to get the correct answer when doing Zero context tests. If context is filled, no issues on my end. For Roleplays: adding things like avoid writing as {{user}} suprisingly helps. Plus a proper prompt of course. I liked the writing style. Handles group characters in 1 card well, during my tests. Fairly uncensored *during roleplay.* Yeah the as an AI stuff can happen at Zero context, but I have no issues once a character card is introduced. I had no issues making outputs that would give me 2500 Life Sentences if posted here. *** Trained with Alpaca Format: ``` ### Instruction: <Prompt> ### Response: ``` OR ``` ### Instruction: <Prompt> ### Input: <Insert Context Here> ### Response: ``` *** <br>wandb: <br>wandb: Run history: <br>wandb: eval/loss █▃▂▂▂▂▂▁▁▁▁▂▂▂▂▂▂▁▁▁ <br>wandb: eval/runtime ▃▂▃▂▃▂▂▃▁▃█▂▃▃▃▂▃▃▂▂ <br>wandb: eval/samples_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: eval/steps_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: train/epoch ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/global_step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/learning_rate ▄███████▇▇▇▇▇▆▆▆▆▅▅▅▅▄▄▄▃▃▃▃▂▂▂▂▂▁▁▁▁▁▁▁ <br>wandb: train/loss █▅▅▆▅▅▄▄▄▆▆▅▆▆▆▅▄▆▅▅▅▆▄▄▃▄▃▃▂▃▄▂▂▃▃▂▁▂▂▂ <br>wandb: <br>wandb: Run summary: <br>wandb: eval/loss 0.74622 <br>wandb: eval/runtime 72.5049 <br>wandb: eval/samples_per_second 37.239 <br>wandb: eval/steps_per_second 2.331 <br>wandb: train/epoch 1.98 <br>wandb: train/global_step 410 <br>wandb: train/learning_rate 0.0 <br>wandb: train/loss 0.6457 <br>wandb: train/total_flos 3.4382652340646707e+18 <br>wandb: train/train_loss 0.70204 <br>wandb: train/train_runtime 10880.917 <br>wandb: train/train_samples_per_second 9.417 <br>wandb: train/train_steps_per_second 0.038 <br>wandb:
LoneStriker/Frostwind-10.7B-v1-6.0bpw-h6-exl2-2
LoneStriker
2023-12-20T05:20:05Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T04:49:02Z
--- license: cc-by-nc-4.0 language: - en --- Frostwind-v1 ![Frost1](https://huggingface.co/Sao10K/Frostwind-10.7B-v1/resolve/main/frost1.png) A finetune of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) <br>Took Roughly 3 Hours with 4x 4090s, over 2 Epochs, with around 52K varied samples. Dataset Composition: <br>20% - Coding <br>30% - Instruct <br>30% - Generalised Data <br>10% - Roleplay <br>10% - Dealignment *** Testing Notes: Fairly smart, as I expected. Obviously not at the level of the bigger models, but I did not expect that level from this. Could be sampler issues, but generally I needed 1/2 swipes to get the correct answer when doing Zero context tests. If context is filled, no issues on my end. For Roleplays: adding things like avoid writing as {{user}} suprisingly helps. Plus a proper prompt of course. I liked the writing style. Handles group characters in 1 card well, during my tests. Fairly uncensored *during roleplay.* Yeah the as an AI stuff can happen at Zero context, but I have no issues once a character card is introduced. I had no issues making outputs that would give me 2500 Life Sentences if posted here. *** Trained with Alpaca Format: ``` ### Instruction: <Prompt> ### Response: ``` OR ``` ### Instruction: <Prompt> ### Input: <Insert Context Here> ### Response: ``` *** <br>wandb: <br>wandb: Run history: <br>wandb: eval/loss █▃▂▂▂▂▂▁▁▁▁▂▂▂▂▂▂▁▁▁ <br>wandb: eval/runtime ▃▂▃▂▃▂▂▃▁▃█▂▃▃▃▂▃▃▂▂ <br>wandb: eval/samples_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: eval/steps_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: train/epoch ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/global_step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/learning_rate ▄███████▇▇▇▇▇▆▆▆▆▅▅▅▅▄▄▄▃▃▃▃▂▂▂▂▂▁▁▁▁▁▁▁ <br>wandb: train/loss █▅▅▆▅▅▄▄▄▆▆▅▆▆▆▅▄▆▅▅▅▆▄▄▃▄▃▃▂▃▄▂▂▃▃▂▁▂▂▂ <br>wandb: <br>wandb: Run summary: <br>wandb: eval/loss 0.74622 <br>wandb: eval/runtime 72.5049 <br>wandb: eval/samples_per_second 37.239 <br>wandb: eval/steps_per_second 2.331 <br>wandb: train/epoch 1.98 <br>wandb: train/global_step 410 <br>wandb: train/learning_rate 0.0 <br>wandb: train/loss 0.6457 <br>wandb: train/total_flos 3.4382652340646707e+18 <br>wandb: train/train_loss 0.70204 <br>wandb: train/train_runtime 10880.917 <br>wandb: train/train_samples_per_second 9.417 <br>wandb: train/train_steps_per_second 0.038 <br>wandb:
LoneStriker/Frostwind-10.7B-v1-3.0bpw-h6-exl2-2
LoneStriker
2023-12-20T05:19:42Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T04:17:35Z
--- license: cc-by-nc-4.0 language: - en --- Frostwind-v1 ![Frost1](https://huggingface.co/Sao10K/Frostwind-10.7B-v1/resolve/main/frost1.png) A finetune of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) <br>Took Roughly 3 Hours with 4x 4090s, over 2 Epochs, with around 52K varied samples. Dataset Composition: <br>20% - Coding <br>30% - Instruct <br>30% - Generalised Data <br>10% - Roleplay <br>10% - Dealignment *** Testing Notes: Fairly smart, as I expected. Obviously not at the level of the bigger models, but I did not expect that level from this. Could be sampler issues, but generally I needed 1/2 swipes to get the correct answer when doing Zero context tests. If context is filled, no issues on my end. For Roleplays: adding things like avoid writing as {{user}} suprisingly helps. Plus a proper prompt of course. I liked the writing style. Handles group characters in 1 card well, during my tests. Fairly uncensored *during roleplay.* Yeah the as an AI stuff can happen at Zero context, but I have no issues once a character card is introduced. I had no issues making outputs that would give me 2500 Life Sentences if posted here. *** Trained with Alpaca Format: ``` ### Instruction: <Prompt> ### Response: ``` OR ``` ### Instruction: <Prompt> ### Input: <Insert Context Here> ### Response: ``` *** <br>wandb: <br>wandb: Run history: <br>wandb: eval/loss █▃▂▂▂▂▂▁▁▁▁▂▂▂▂▂▂▁▁▁ <br>wandb: eval/runtime ▃▂▃▂▃▂▂▃▁▃█▂▃▃▃▂▃▃▂▂ <br>wandb: eval/samples_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: eval/steps_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇ <br>wandb: train/epoch ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/global_step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ <br>wandb: train/learning_rate ▄███████▇▇▇▇▇▆▆▆▆▅▅▅▅▄▄▄▃▃▃▃▂▂▂▂▂▁▁▁▁▁▁▁ <br>wandb: train/loss █▅▅▆▅▅▄▄▄▆▆▅▆▆▆▅▄▆▅▅▅▆▄▄▃▄▃▃▂▃▄▂▂▃▃▂▁▂▂▂ <br>wandb: <br>wandb: Run summary: <br>wandb: eval/loss 0.74622 <br>wandb: eval/runtime 72.5049 <br>wandb: eval/samples_per_second 37.239 <br>wandb: eval/steps_per_second 2.331 <br>wandb: train/epoch 1.98 <br>wandb: train/global_step 410 <br>wandb: train/learning_rate 0.0 <br>wandb: train/loss 0.6457 <br>wandb: train/total_flos 3.4382652340646707e+18 <br>wandb: train/train_loss 0.70204 <br>wandb: train/train_runtime 10880.917 <br>wandb: train/train_samples_per_second 9.417 <br>wandb: train/train_steps_per_second 0.038 <br>wandb:
AbhishekkV19/bert-base-cased-5kvul-10aug-3nsfw-10w-exp-10ep
AbhishekkV19
2023-12-20T05:13:57Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-19T16:47:19Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-cased-5kvul-10aug-3nsfw-10w-exp-10ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-5kvul-10aug-3nsfw-10w-exp-10ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0013 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2633 | 1.0 | 607 | 0.0025 | 1.0 | | 0.2317 | 2.0 | 1214 | 0.0119 | 1.0 | | 0.1965 | 3.0 | 1821 | 0.0057 | 1.0 | | 0.1648 | 4.0 | 2428 | 0.0031 | 1.0 | | 0.097 | 5.0 | 3035 | 0.0022 | 1.0 | | 0.0862 | 6.0 | 3642 | 0.0016 | 1.0 | | 0.0604 | 7.0 | 4249 | 0.0013 | 1.0 | | 0.0515 | 8.0 | 4856 | 0.0016 | 1.0 | | 0.0547 | 9.0 | 5463 | 0.0011 | 1.0 | | 0.0414 | 10.0 | 6070 | 0.0013 | 1.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
Shreyas45/Llama2_Text-to-SQL_Fintuned
Shreyas45
2023-12-20T05:00:37Z
0
0
peft
[ "peft", "region:us" ]
null
2023-12-19T15:04:32Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0 - PEFT 0.5.0 # Inference Code ### Install required libraries ```python !pip install transformers peft ``` ### Login ```python from huggingface_hub import login token = "Your Key" login(token) ``` #### Import necessary modules ```python from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer import torch from transformers import BitsAndBytesConfig from peft import prepare_model_for_kbit_training ``` #### Load PEFT model and configuration ```python config = PeftConfig.from_pretrained("Shreyas45/Llama2_Text-to-SQL_Fintuned") peft_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf") peft_model = PeftModel.from_pretrained(peft_model, "Shreyas45/Llama2_Text-to-SQL_Fintuned") ``` ### Load trained model and tokenizer ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig from peft import prepare_model_for_kbit_training trained_model_tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, trust_remote_code=True) trained_model_tokenizer.pad_token = trained_model_tokenizer.eos_token ``` ### Define a SQL query ```python query = '''In the table named management with columns (department_id VARCHAR, temporary_acting VARCHAR); CREATE TABLE department (name VARCHAR, num_employees VARCHAR, department_id VARCHAR), Show the name and number of employees for the departments managed by heads whose temporary acting value is 'Yes'?''' ``` ### Construct prompt ```python prompt = f'''### Instruction: Below is an instruction that describes a task and the schema of the table in the database. Write a response that generates a request in the form of a SQL query. Here the schema of the table is mentioned first followed by the question for which the query needs to be generated. And the question is: {query} ###Output: ''' ``` ### Tokenize the prompt ```python encodings = trained_model_tokenizer(prompt, return_tensors='pt') ``` #### Configure generation parameters ```python generation_config = peft_model.generation_config generation_config.max_new_token = 1024 generation_config.temperature = 0.7 generation_config.top_p = 0.7 generation_config.num_return_sequence = 1 generation_config.pad_token_id = trained_model_tokenizer.pad_token_id generation_config.eos_token_id = trained_model_tokenizer.eos_token_id ``` ### Generate SQL query using the model ```python with torch.inference_mode(): outputs = peft_model.generate( input_ids=encodings.input_ids, attention_mask=encodings.attention_mask, generation_config=generation_config, max_new_tokens=100 ) ``` ### Decode and print the generated SQL query ```python generated_query = trained_model_tokenizer.decode(outputs[0]) print("Generated SQL Query:") print(generated_query) ```
ahmedabdelwahed/SFT-base-4-epochs
ahmedabdelwahed
2023-12-20T04:49:32Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/mt5-base", "base_model:adapter:google/mt5-base", "region:us" ]
null
2023-12-19T21:55:46Z
--- library_name: peft base_model: google/mt5-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
appvoid/palmer-002-ultra
appvoid
2023-12-20T04:44:19Z
6
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:appvoid/no-prompt-50k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-16T22:15:50Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation datasets: - appvoid/no-prompt-50k --- ![palmer](https://huggingface.co/appvoid/palmer-001/resolve/main/palmer.jpeg) # palmer ### a better base model palmer is a series of ~1b parameters language models fine-tuned to be used as base models instead of using custom prompts for tasks. This means that it can be further fine-tuned on more data with custom prompts as usual or be used for downstream tasks as any base model you can get. The model has the best of both worlds: some "bias" to act as an assistant, but also the abillity to predict the next-word from its internet knowledge base. It's a 1.1b llama 2 model so you can use it with your favorite tools/frameworks. ### evaluation |Model| ARC_C| HellaSwag| PIQA| Winogrande| |------|-----|-----------|------|-------------| |tinyllama-2t| 0.2807| 0.5463| 0.7067| 0.5683| |palmer-001| 0.2807| 0.5524| 0.7106| 0.5896| |tinyllama-2.5t|0.3191|0.5896| 0.7307| 0.5872| |palmer-002|0.3242|**0.5956**|**0.7345**|0.5888| |palmer-002-ultra|**0.3319**| 0.5877 |0.7252|**0.6038**| This is a continuation on `palmer-x-002`. As of now, this is the best overall model. ### training Training took ~7.5 P100 gpu hours. It was trained on 50,000 gpt-4 shuffled samples. palmer was fine-tuned using lower learning rates ensuring it keeps as much general knowledge as possible. ### prompt ``` no prompt ``` <a href="https://ko-fi.com/appvoid" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 48px !important;width: 180px !important; filter: invert(70%);" ></a>
ntc-ai/SDXL-LoRA-slider.low-poly-count
ntc-ai
2023-12-20T04:37:13Z
7
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2023-12-20T04:37:09Z
--- language: - en thumbnail: "images/evaluate/low poly count.../low poly count_17_3.0.png" widget: - text: low poly count output: url: images/low poly count_17_3.0.png - text: low poly count output: url: images/low poly count_19_3.0.png - text: low poly count output: url: images/low poly count_20_3.0.png - text: low poly count output: url: images/low poly count_21_3.0.png - text: low poly count output: url: images/low poly count_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "low poly count" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - low poly count (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/low poly count_17_-3.0.png" width=256 height=256 /> | <img src="images/low poly count_17_0.0.png" width=256 height=256 /> | <img src="images/low poly count_17_3.0.png" width=256 height=256 /> | | <img src="images/low poly count_19_-3.0.png" width=256 height=256 /> | <img src="images/low poly count_19_0.0.png" width=256 height=256 /> | <img src="images/low poly count_19_3.0.png" width=256 height=256 /> | | <img src="images/low poly count_20_-3.0.png" width=256 height=256 /> | <img src="images/low poly count_20_0.0.png" width=256 height=256 /> | <img src="images/low poly count_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` low poly count ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.low-poly-count', weight_name='low poly count.safetensors', adapter_name="low poly count") # Activate the LoRA pipe.set_adapters(["low poly count"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, low poly count" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 480+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
mkoven/ppo-Huggy
mkoven
2023-12-20T04:23:09Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-12-20T04:23:03Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: mkoven/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
multimodalart/poli-standard-face-repeats-2
multimodalart
2023-12-20T04:21:50Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-20T04:21:45Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of <s0><s1> a man wearing headphones and a blue shirt output: url: image-0.png - text: A photo of <s0><s1> a bald man wearing glasses and a white t - shirt output: url: image-1.png - text: A photo of <s0><s1> a man with glasses and a beard smiles output: url: image-2.png - text: A photo of <s0><s1> a bald man with glasses and a colorful shirt output: url: image-3.png - text: A photo of <s0><s1> a man with glasses and a hat wearing an orange cap output: url: image-4.png - text: A photo of <s0><s1> a man wearing glasses and a yellow hat taking a selfie output: url: image-5.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poli-standard-face-repeats-2 <Gallery /> ## Model description ### These are multimodalart/poli-standard-face-repeats-2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poli-standard-face-repeats-2', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poli-standard-face-repeats-2', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poli-standard-face-repeats-2/blob/main/pytorch_lora_weights.safetensors). Rename it and place it on your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/poli-standard-face-repeats-2/blob/main/embeddings.safetensors). Rename it and place it on it on your embeddings folder. All [Files & versions](/multimodalart/poli-standard-face-repeats-2/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/poli-standard-face
multimodalart
2023-12-20T04:18:35Z
3
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-20T04:18:31Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of <s0><s1> a man wearing headphones and a blue shirt output: url: image-0.png - text: A photo of <s0><s1> a bald man wearing glasses and a white t - shirt output: url: image-1.png - text: A photo of <s0><s1> a man with glasses and a beard smiles output: url: image-2.png - text: A photo of <s0><s1> a bald man with glasses and a colorful shirt output: url: image-3.png - text: A photo of <s0><s1> a man with glasses and a hat wearing an orange cap output: url: image-4.png - text: A photo of <s0><s1> a man wearing glasses and a yellow hat taking a selfie output: url: image-5.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poli-standard-face <Gallery /> ## Model description ### These are multimodalart/poli-standard-face LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poli-standard-face', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poli-standard-face', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poli-standard-face/blob/main/pytorch_lora_weights.safetensors). Rename it and place it on your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/poli-standard-face/blob/main/embeddings.safetensors). Rename it and place it on it on your embeddings folder. All [Files & versions](/multimodalart/poli-standard-face/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/poli-standard-face-rank-64
multimodalart
2023-12-20T04:17:22Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-20T04:17:16Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of <s0><s1> a man wearing headphones and a blue shirt output: url: image-0.png - text: A photo of <s0><s1> a bald man wearing glasses and a white t - shirt output: url: image-1.png - text: A photo of <s0><s1> a man with glasses and a beard smiles output: url: image-2.png - text: A photo of <s0><s1> a bald man with glasses and a colorful shirt output: url: image-3.png - text: A photo of <s0><s1> a man with glasses and a hat wearing an orange cap output: url: image-4.png - text: A photo of <s0><s1> a man wearing glasses and a yellow hat taking a selfie output: url: image-5.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poli-standard-face-rank-64 <Gallery /> ## Model description ### These are multimodalart/poli-standard-face-rank-64 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poli-standard-face-rank-64', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poli-standard-face-rank-64', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poli-standard-face-rank-64/blob/main/pytorch_lora_weights.safetensors). Rename it and place it on your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/poli-standard-face-rank-64/blob/main/embeddings.safetensors). Rename it and place it on it on your embeddings folder. All [Files & versions](/multimodalart/poli-standard-face-rank-64/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
somehumanperson1/phi2pytorch
somehumanperson1
2023-12-20T04:15:53Z
12
0
transformers
[ "transformers", "pytorch", "phi-msft", "text-generation", "nlp", "code", "custom_code", "en", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-12-20T04:02:13Z
--- inference: false license: other license_name: microsoft-research-license license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code --- ## Model Summary Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters. Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more. ## Intended Uses Phi-2 is intended for research purposes only. Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format. ### QA Format: You can provide the prompt as a standalone question as follows: ```markdown Write a detailed analogy between mathematics and a lighthouse. ``` where the model generates the text after "." . To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:" ```markdown Instruct: Write a detailed analogy between mathematics and a lighthouse. Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us. ``` where the model generates the text after "Output:". ### Chat Format: ```markdown Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions? Bob: Well, have you tried creating a study schedule and sticking to it? Alice: Yes, I have, but it doesn't seem to help much. Bob: Hmm, maybe you should try studying in a quiet environment, like the library. Alice: ... ``` where the model generates the text after the first "Bob:". ### Code Format: ```python def print_prime(n): """ Print all primes between 1 and n """ primes = [] for num in range(2, n+1): is_prime = True for i in range(2, int(math.sqrt(num))+1): if num % i == 0: is_prime = False break if is_prime: primes.append(num) print(primes) ``` where the model generates the text after the comments. **Notes:** * Phi-2 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications. * Direct adoption for production tasks is out of the scope of this research project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details. * If you are using `transformers>=4.36.0`, always load the model with `trust_remote_code=True` to prevent side-effects. ## Sample Code There are four types of execution mode: 1. FP16 / Flash-Attention / CUDA: ```python model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, device_map="cuda", trust_remote_code=True) ``` 2. FP16 / CUDA: ```python model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", device_map="cuda", trust_remote_code=True) ``` 3. FP32 / CUDA: ```python model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32, device_map="cuda", trust_remote_code=True) ``` 4. FP32 / CPU: ```python model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32, device_map="cpu", trust_remote_code=True) ``` To ensure the maximum compatibility, we recommend using the second execution mode (FP16 / CUDA), as follows: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer torch.set_default_device("cuda") model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True) inputs = tokenizer('''def print_prime(n): """ Print all primes between 1 and n """''', return_tensors="pt", return_attention_mask=False) outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` **Remark:** In the generation function, our model currently does not support beam search (`num_beams > 1`). Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings. ## Limitations of Phi-2 * Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions. * Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. * Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users. * Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response. * Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring trainig data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs. * Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining. * Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses. ## Training ### Model * Architecture: a Transformer-based model with next-word prediction objective * Context length: 2048 tokens * Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4. * Training tokens: 1.4T tokens * GPUs: 96xA100-80G * Training time: 14 days ### Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ### License The model is licensed under the [microsoft-research-license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
Danjie/Chadgpt-gpt2-xl
Danjie
2023-12-20T04:06:38Z
0
0
peft
[ "peft", "safetensors", "en", "base_model:openai-community/gpt2-xl", "base_model:adapter:openai-community/gpt2-xl", "license:mit", "region:us" ]
null
2023-12-20T03:50:31Z
--- library_name: peft base_model: gpt2-xl license: mit language: - en --- # Chadgpt gpt2-xl ## Colab Example https://colab.research.google.com/drive/1a_mBaqtufEDfJXQVokrap7D7Gmv5gtCK?usp=sharing ## Install Prerequisite ```bash !pip install -q git+https://github.com/huggingface/peft.git !pip install transformers !pip install -U accelerate !pip install accelerate !pip install bitsandbytes # Instal bits and bytes for inference of the model ``` ## Download Model ```python import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "danjie/Chadgpt-gpt2-xl" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id) ``` ## Inference ```python def talk_with_llm(tweet: str) -> str: # Encode and move tensor into cuda if applicable. encoded_input = tokenizer(tweet, return_tensors='pt') encoded_input = {k: v.to("cuda") for k, v in encoded_input.items()} output = model.generate(**encoded_input, max_new_tokens=64) response = tokenizer.decode(output[0], skip_special_tokens=True) return response talk_with_llm("<User> Your sentence \n<Assistant>") ```
multimodalart/poli-multiplier-100-face-repeats-2
multimodalart
2023-12-20T04:05:01Z
3
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-20T04:04:53Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of <s0><s1> a man wearing headphones and a blue shirt output: url: image-0.png - text: A photo of <s0><s1> a bald man wearing glasses and a white t - shirt output: url: image-1.png - text: A photo of <s0><s1> a man with glasses and a beard smiles output: url: image-2.png - text: A photo of <s0><s1> a bald man with glasses and a colorful shirt output: url: image-3.png - text: A photo of <s0><s1> a man with glasses and a hat wearing an orange cap output: url: image-4.png - text: A photo of <s0><s1> a man wearing glasses and a yellow hat taking a selfie output: url: image-5.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poli-multiplier-100-face-repeats-2 <Gallery /> ## Model description ### These are multimodalart/poli-multiplier-100-face-repeats-2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poli-multiplier-100-face-repeats-2', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poli-multiplier-100-face-repeats-2', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poli-multiplier-100-face-repeats-2/blob/main/pytorch_lora_weights.safetensors). Rename it and place it on your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/poli-multiplier-100-face-repeats-2/blob/main/embeddings.safetensors). Rename it and place it on it on your embeddings folder. All [Files & versions](/multimodalart/poli-multiplier-100-face-repeats-2/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
ZhiguangHan/mt5-small-task2-dataset3
ZhiguangHan
2023-12-20T04:00:54Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-09T06:25:52Z
--- license: apache-2.0 base_model: google/mt5-small tags: - generated_from_trainer metrics: - accuracy model-index: - name: mt5-small-task2-dataset3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-task2-dataset3 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3113 - Accuracy: 0.704 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3153 | 1.0 | 250 | 0.7513 | 0.526 | | 0.8878 | 2.0 | 500 | 0.5870 | 0.576 | | 0.7191 | 3.0 | 750 | 0.5021 | 0.578 | | 0.5842 | 4.0 | 1000 | 0.4596 | 0.612 | | 0.5085 | 5.0 | 1250 | 0.4158 | 0.62 | | 0.4515 | 6.0 | 1500 | 0.3865 | 0.618 | | 0.4086 | 7.0 | 1750 | 0.3755 | 0.648 | | 0.3811 | 8.0 | 2000 | 0.3505 | 0.662 | | 0.3449 | 9.0 | 2250 | 0.3366 | 0.678 | | 0.3294 | 10.0 | 2500 | 0.3280 | 0.674 | | 0.3146 | 11.0 | 2750 | 0.3201 | 0.702 | | 0.305 | 12.0 | 3000 | 0.3146 | 0.69 | | 0.2972 | 13.0 | 3250 | 0.3130 | 0.702 | | 0.2819 | 14.0 | 3500 | 0.3106 | 0.696 | | 0.2828 | 15.0 | 3750 | 0.3113 | 0.704 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
multimodalart/poli-standard-face-repeats-2-no-prior
multimodalart
2023-12-20T03:57:51Z
9
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-20T03:57:47Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of <s0><s1> a man wearing headphones and a blue shirt output: url: image-0.png - text: A photo of <s0><s1> a bald man wearing glasses and a white t - shirt output: url: image-1.png - text: A photo of <s0><s1> a man with glasses and a beard smiles output: url: image-2.png - text: A photo of <s0><s1> a bald man with glasses and a colorful shirt output: url: image-3.png - text: A photo of <s0><s1> a man with glasses and a hat wearing an orange cap output: url: image-4.png - text: A photo of <s0><s1> a man wearing glasses and a yellow hat taking a selfie output: url: image-5.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poli-standard-face-repeats-2-no-prior <Gallery /> ## Model description ### These are multimodalart/poli-standard-face-repeats-2-no-prior LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poli-standard-face-repeats-2-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poli-standard-face-repeats-2-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poli-standard-face-repeats-2-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it on your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/poli-standard-face-repeats-2-no-prior/blob/main/embeddings.safetensors). Rename it and place it on it on your embeddings folder. All [Files & versions](/multimodalart/poli-standard-face-repeats-2-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/poli-multiplier-100-face-no-prior
multimodalart
2023-12-20T03:44:02Z
11
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-20T03:43:55Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of <s0><s1> a man wearing headphones and a blue shirt output: url: image-0.png - text: A photo of <s0><s1> a bald man wearing glasses and a white t - shirt output: url: image-1.png - text: A photo of <s0><s1> a man with glasses and a beard smiles output: url: image-2.png - text: A photo of <s0><s1> a bald man with glasses and a colorful shirt output: url: image-3.png - text: A photo of <s0><s1> a man with glasses and a hat wearing an orange cap output: url: image-4.png - text: A photo of <s0><s1> a man wearing glasses and a yellow hat taking a selfie output: url: image-5.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poli-100-multiplier-face-no-prior <Gallery /> ## Model description ### These are multimodalart/poli-100-multiplier-face-no-prior LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poli-100-multiplier-face-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poli-100-multiplier-face-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poli-100-multiplier-face-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it on your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/poli-100-multiplier-face-no-prior/blob/main/embeddings.safetensors). Rename it and place it on it on your embeddings folder. All [Files & versions](/multimodalart/poli-100-multiplier-face-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
arnabdhar/bert-tiny-ontonotes
arnabdhar
2023-12-20T03:35:03Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "pytorch", "BertForTokenClassification", "named-entity-recognition", "roberta-base", "generated_from_trainer", "en", "dataset:tner/ontonotes5", "base_model:prajjwal1/bert-tiny", "base_model:finetune:prajjwal1/bert-tiny", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-20T03:07:46Z
--- language: - en license: mit base_model: prajjwal1/bert-tiny tags: - pytorch - BertForTokenClassification - named-entity-recognition - roberta-base - generated_from_trainer metrics: - recall - precision - f1 - accuracy model-index: - name: bert-tiny-ontonotes results: - task: type: token-classification metrics: - type: accuracy value: 0.9476 name: accuracy - type: precision value: 0.6817 name: precision - type: accuracy value: 0.7193 name: recall - type: accuracy value: 0.7 name: F1 datasets: - tner/ontonotes5 library_name: transformers pipeline_tag: token-classification --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-tiny-ontonotes This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [tner/ontonotes5](https://huggingface.co/datasets/tner/ontonotes5) dataset. It achieves the following results on the evaluation set: - Loss: 0.1917 - Recall: 0.7193 - Precision: 0.6817 - F1: 0.7000 - Accuracy: 0.9476 ## How to use the Model ### Using pipeline ```python from transformers import pipeline import torch # Initiate the pipeline device = 0 if torch.cuda.is_available() else 'cpu' ner = pipeline("token-classification", "arnabdhar/bert-tiny-ontonotes", device=device) # use the pipeline input_text = "My name is Clara and I live in Berkeley, California." results = ner(input_text) ``` ## Intended uses & limitations This model is fine-tuned for **Named Entity Recognition** task and you can use the model as it is or can use this model as a base model for further fine tuning on your custom dataset. The following entities were fine-tuned on: CARDINAL, DATE, PERSON, NORP, GPE, LAW, PERCENT, ORDINAL, MONEY, WORK_OF_ART, FAC, TIME, QUANTITY, PRODUCT, LANGUAGE, ORG, LOC, EVENT ## Training and evaluation data The dataset has 3 partitions, `train`, `validation` and `test`, all the 3 partitions were combined and then a 80:20 train-test split was made for finet uning process. The following `ID2LABEL` mapping was used. ```json { "0": "O", "1": "B-CARDINAL", "2": "B-DATE", "3": "I-DATE", "4": "B-PERSON", "5": "I-PERSON", "6": "B-NORP", "7": "B-GPE", "8": "I-GPE", "9": "B-LAW", "10": "I-LAW", "11": "B-ORG", "12": "I-ORG", "13": "B-PERCENT", "14": "I-PERCENT", "15": "B-ORDINAL", "16": "B-MONEY", "17": "I-MONEY", "18": "B-WORK_OF_ART", "19": "I-WORK_OF_ART", "20": "B-FAC", "21": "B-TIME", "22": "I-CARDINAL", "23": "B-LOC", "24": "B-QUANTITY", "25": "I-QUANTITY", "26": "I-NORP", "27": "I-LOC", "28": "B-PRODUCT", "29": "I-TIME", "30": "B-EVENT", "31": "I-EVENT", "32": "I-FAC", "33": "B-LANGUAGE", "34": "I-PRODUCT", "35": "I-ORDINAL", "36": "I-LANGUAGE" } ``` ## Training procedure The model was finetuned on Google Colab with a __NVIDIA T4__ GPU with 15GB of VRAM. It took around 5 minutes to fine tune and evaluate the model with 6000 steps of total training steps. For more details, you can look into the [Weights & Biases](https://wandb.ai/2wb2ndur/NER-ontonotes/runs/d93imv8j/overview?workspace=user-2wb2ndur) log history. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 32 - eval_batch_size: 160 - seed: 75241309 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 6000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Recall | Precision | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:| | 0.4283 | 0.31 | 600 | 0.3864 | 0.4561 | 0.4260 | 0.4405 | 0.9058 | | 0.3214 | 0.63 | 1200 | 0.2865 | 0.5865 | 0.5485 | 0.5669 | 0.9265 | | 0.2886 | 0.94 | 1800 | 0.2439 | 0.6432 | 0.6165 | 0.6295 | 0.9354 | | 0.2511 | 1.25 | 2400 | 0.2233 | 0.6765 | 0.6250 | 0.6497 | 0.9389 | | 0.2224 | 1.56 | 3000 | 0.2088 | 0.6878 | 0.6642 | 0.6758 | 0.9433 | | 0.2181 | 1.88 | 3600 | 0.2001 | 0.7105 | 0.6684 | 0.6888 | 0.9451 | | 0.215 | 2.19 | 4200 | 0.1954 | 0.7140 | 0.6795 | 0.6963 | 0.9469 | | 0.1907 | 2.5 | 4800 | 0.1934 | 0.7169 | 0.6776 | 0.6967 | 0.9470 | | 0.209 | 2.82 | 5400 | 0.1918 | 0.7185 | 0.6812 | 0.6994 | 0.9475 | | 0.2073 | 3.13 | 6000 | 0.1917 | 0.7193 | 0.6817 | 0.7000 | 0.9476 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
gautamtata/videomae-base-finetuned-kinetics-finetuned-lipsync-subset-1
gautamtata
2023-12-20T03:26:21Z
63
1
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-12-19T08:40:17Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base-finetuned-kinetics tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-kinetics-finetuned-lipsync-subset-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-finetuned-lipsync-subset-1 This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5758 - Accuracy: 0.8107 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 208 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6552 | 0.25 | 52 | 0.6489 | 0.5610 | | 0.4105 | 1.25 | 104 | 0.3532 | 0.8780 | | 0.5187 | 2.25 | 156 | 0.2864 | 0.9220 | | 0.2265 | 3.25 | 208 | 0.3047 | 0.9122 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
LoneStriker/Mixtral-8x7B-Instruct-v0.1-5.0bpw-h6-exl2
LoneStriker
2023-12-20T03:21:16Z
6
2
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "fr", "it", "de", "es", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-19T22:28:47Z
--- license: apache-2.0 language: - fr - it - de - es - en inference: false --- # Model Card for Mixtral-8x7B The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/). ## Warning This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that model cannot (yet) be instantiated with HF. ## Instruction format This format must be strictly respected, otherwise the model will generate sub-optimal outputs. The template used to build a prompt for the Instruct model is defined as follows: ``` <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST] ``` Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings. As reference, here is the pseudo-code used to tokenize instructions during fine-tuning: ```python def tokenize(text): return tok.encode(text, add_special_tokens=False) [BOS_ID] + tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_1) + [EOS_ID] + … tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_N) + [EOS_ID] ``` In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space. ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem: ### In half-precision Note `float16` precision only works on GPU devices <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision using (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Limitations The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
LoneStriker/Mixtral-8x7B-Instruct-v0.1-3.5bpw-h6-exl2
LoneStriker
2023-12-20T03:20:31Z
6
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "fr", "it", "de", "es", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-19T21:56:39Z
--- license: apache-2.0 language: - fr - it - de - es - en inference: false --- # Model Card for Mixtral-8x7B The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/). ## Warning This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that model cannot (yet) be instantiated with HF. ## Instruction format This format must be strictly respected, otherwise the model will generate sub-optimal outputs. The template used to build a prompt for the Instruct model is defined as follows: ``` <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST] ``` Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings. As reference, here is the pseudo-code used to tokenize instructions during fine-tuning: ```python def tokenize(text): return tok.encode(text, add_special_tokens=False) [BOS_ID] + tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_1) + [EOS_ID] + … tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_N) + [EOS_ID] ``` In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space. ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem: ### In half-precision Note `float16` precision only works on GPU devices <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision using (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Limitations The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
LoneStriker/Mixtral-8x7B-Instruct-v0.1-2.4bpw-h6-exl2
LoneStriker
2023-12-20T03:12:13Z
5
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "fr", "it", "de", "es", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-19T21:24:55Z
--- license: apache-2.0 language: - fr - it - de - es - en inference: false --- # Model Card for Mixtral-8x7B The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/). ## Warning This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that model cannot (yet) be instantiated with HF. ## Instruction format This format must be strictly respected, otherwise the model will generate sub-optimal outputs. The template used to build a prompt for the Instruct model is defined as follows: ``` <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST] ``` Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings. As reference, here is the pseudo-code used to tokenize instructions during fine-tuning: ```python def tokenize(text): return tok.encode(text, add_special_tokens=False) [BOS_ID] + tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_1) + [EOS_ID] + … tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_N) + [EOS_ID] ``` In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space. ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem: ### In half-precision Note `float16` precision only works on GPU devices <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision using (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Limitations The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
mikedad/dqn-SpaceInvadersNoFrameskip-v4
mikedad
2023-12-20T03:11:56Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-20T03:11:06Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 671.50 +/- 145.48 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mikedad -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mikedad -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mikedad ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```