| Column | dtype | Range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-09 18:59:16 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 551 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-09 18:27:33 |
| card | string | length 11 to 1.01M |
gaelcharrier/ppo-Huggy
gaelcharrier
2024-01-17T08:14:10Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-01-17T08:14:05Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: gaelcharrier/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
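The commands above assume the run files are already local; as a hedged convenience (not part of the original card), the repository can be fetched with `huggingface_hub` before resuming training or loading the ONNX policy. The local directory name is an arbitrary choice.

```python
# Hedged sketch (not from the original card): download the trained Huggy run
# locally so it can be resumed with `mlagents-learn --resume` or inspected.
from huggingface_hub import snapshot_download

# repo_id comes from the model card above; the local directory is arbitrary.
local_path = snapshot_download(
    repo_id="gaelcharrier/ppo-Huggy",
    local_dir="./downloads/ppo-Huggy",
)
print(f"ONNX checkpoint, config and TensorBoard logs downloaded to {local_path}")
```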
pratham-saraf/ms7b-news-songify-sharded
pratham-saraf
2024-01-17T08:12:52Z
15
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-16T19:14:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
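The "How to Get Started" section above is empty; judging only from the repo tags (transformers, mistral, text-generation, 4-bit bitsandbytes), a minimal loading sketch might look like the following. It assumes `bitsandbytes` is installed, and the prompt and generation settings are illustrative rather than taken from the card.

```python
# Hedged sketch based only on the repo tags (mistral, text-generation,
# 4-bit/bitsandbytes); requires `bitsandbytes` since the weights are stored 4-bit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pratham-saraf/ms7b-news-songify-sharded"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short song about today's news:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```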
may-ohta/iwslt14_prompt
may-ohta
2024-01-17T08:09:23Z
5
0
JoeyNMT
[ "JoeyNMT", "Machine-translation", "en", "de", "fr", "multilingual", "dataset:may-ohta/iwslt14", "license:apache-2.0", "region:us" ]
null
2024-01-16T15:35:17Z
--- license: apache-2.0 library_name: JoeyNMT task: Machine-translation tags: - JoeyNMT - Machine-translation language: - en - de - fr - multilingual datasets: - may-ohta/iwslt14 metrics: - bleu --- # JoeyNMT: iwslt14 de-en-fr multilingual This is a JoeyNMT model for multilingual MT with language tags, built for a demo purpose. The model is trained on iwslt14 de-en / en-fr parallel data using DDP. Install [JoeyNMT](https://github.com/joeynmt/joeynmt) v2.3: ``` $ pip install git+https://github.com/joeynmt/joeynmt.git ``` ## Translation Torch hub interface: ```python import torch iwslt14 = torch.hub.load("joeynmt/joeynmt", "iwslt14_prompt") translation = iwslt14.translate( src=["Hello world!"], # src sentence src_prompt=["<en>"], # src language code trg_prompt=["<de>"], # trg language code beam_size=1, ) print(translation) # ["Hallo Welt!"] ``` (See [jupyter notebook](https://github.com/joeynmt/joeynmt/blob/main/notebooks/torchhub.ipynb) for details) ## Training ``` $ python -m joeynmt train iwslt14_prompt/config.yaml --use-ddp --skip-test ``` (See `train.log` for details) ## Evaluation ``` $ git clone https://huggingface.co/may-ohta/iwslt14_prompt $ python -m joeynmt test iwslt14_prompt/config.yaml --output-path iwslt14_prompt/hyp ``` direction | bleu --------- | :---- en->de | 28.88 de->en | 35.28 en->fr | 38.86 fr->en | 40.35 - beam_size: 5 - beam_alpha: 1.0 - sacrebleu signature `nrefs:1|case:lc|eff:no|tok:13a|smooth:exp|version:2.4.0` (See `test.log` for details) ## Data Format We downloaded IWSLT14 de-en and en-fr from [https://wit3.fbk.eu/2014-01](https://wit3.fbk.eu/2014-01) and created `{train|dev|test}.tsv` files in the following format: |src_prompt|src|trg_prompt|trg| |:---------|:--|:---------|:--| |`<en>`|Hello.|`<de>`|Hallo.| |`<de>`|Vielen Dank!|`<en>`|Thank you!| (See `test.ref.de-en.tsv`)
MaziyarPanahi/shisa-7b-v1-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T08:07:19Z
21
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "augmxnt/shisa-7b-v1", "ja", "en", "dataset:augmxnt/ultra-orca-boros-en-ja-v1", "dataset:Open-Orca/SlimOrca", "dataset:augmxnt/shisa-en-ja-dpo-v1", "arxiv:2310.05914", "arxiv:2305.18290", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us", "conversational" ]
text-generation
2024-01-17T08:02:25Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - augmxnt/shisa-7b-v1 - transformers - safetensors - mistral - text-generation - ja - en - dataset:augmxnt/ultra-orca-boros-en-ja-v1 - dataset:Open-Orca/SlimOrca - dataset:augmxnt/shisa-en-ja-dpo-v1 - arxiv:2310.05914 - arxiv:2305.18290 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us --- # shisa-7b-v1-Mistral-7B-Instruct-v0.1 shisa-7b-v1-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [augmxnt/shisa-7b-v1](https://huggingface.co/augmxnt/shisa-7b-v1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: augmxnt/shisa-7b-v1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/shisa-7b-v1-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
nullne/taxi
nullne
2024-01-17T08:06:26Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T08:06:24Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="nullne/taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
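The usage snippet above calls `load_from_hub` and `gym` without defining or importing them. Below is a speculative sketch of such a helper; the pickle layout (a dict holding at least an `env_id` key plus the learned Q-table) is an assumption, not something the card specifies.

```python
# Speculative sketch of the undeclared helper used above; the pickle layout
# (a dict with an "env_id" key and the learned Q-table) is an assumption.
import pickle

import gymnasium as gym  # or `import gym`, depending on which API the checkpoint expects
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="nullne/taxi", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```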
nullne/q-FrozenLake-v1-4x4-noSlippery
nullne
2024-01-17T08:05:18Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T08:05:16Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="nullne/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
TanHanlin/q-FrozenLake-v1-4x4-noSlippery
TanHanlin
2024-01-17T07:56:37Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T07:56:34Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="TanHanlin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
LoneStriker/Code-290k-13B-8.0bpw-h8-exl2
LoneStriker
2024-01-17T07:53:51Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T07:48:29Z
---
license: cc-by-nc-nd-4.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
language:
- en
tags:
- code
---

**Code-290k-13B**

Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes and rarely explain their output. What if they could give a detailed explanation along with the code? That is what I have tried to do here. The base Llama-2 model was used for training. It is trained on around **290,000** sets of code, each set containing 2 conversations. Code in Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell and other languages, together with detailed explanations, was used for training. It is built on my existing datasets [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT) and [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT). The conversations are in Vicuna/ShareGPT format, and each set includes a detailed explanation alongside the code. I have released the new dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) on which this model is trained.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took 165 hours, using the DeepSpeed codebase. The model is a full fine-tune of Meta's Llama-2. Links to quantized models are given below.

**GPTQ, GGUF & AWQ**

- GPTQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GPTQ)
- GGUF: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GGUF)
- AWQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-AWQ)

Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) for making quantized versions of the model.

**Example Prompt:**

```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.

Context
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

You can modify the above prompt as per your requirements. I have used ShareGPT/Vicuna format v1.1.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

Will update soon.
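The card documents a prompt template but no code to assemble it; the small helper below just formats that template. The function name and the sample user message are illustrative, not part of the released model.

```python
# Illustrative helper that assembles the Vicuna/ShareGPT-style prompt documented
# above; the system text is copied from the card, the function itself is not.
SYSTEM = (
    "This is a conversation with your helpful AI assistant. "
    "AI assistant can generate Code in various Programming Languages "
    "along with necessary explanation."
)


def build_prompt(user_message: str) -> str:
    """Return a prompt string in the format the model was trained on."""
    return (
        f"{SYSTEM}\n\n"
        "Context\n"
        "You are a helpful AI assistant.\n\n"
        f"USER: {user_message}\n"
        "ASSISTANT:"
    )


print(build_prompt("Write a Python function that reverses a linked list."))
```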
Cegil/tinyllama2_finetuned_chatbot_hey
Cegil
2024-01-17T07:45:50Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-01-16T09:37:11Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 model-index: - name: tinyllama2_finetuned_chatbot_hey results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama2_finetuned_chatbot_hey This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
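The trainer-generated card above lists hyperparameters but no inference snippet; since the repo holds a PEFT adapter for TinyLlama/TinyLlama-1.1B-Chat-v1.0 (per its tags), a minimal loading sketch might look like this. Dtype, device placement and generation settings are assumptions.

```python
# Hedged sketch: attach this PEFT adapter to its TinyLlama base model.
# Dtype, device placement and generation settings are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "Cegil/tinyllama2_finetuned_chatbot_hey"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hey, how are you today?", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```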
LoneStriker/Code-290k-13B-6.0bpw-h6-exl2
LoneStriker
2024-01-17T07:44:36Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T07:40:30Z
---
license: cc-by-nc-nd-4.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
language:
- en
tags:
- code
---

**Code-290k-13B**

Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes and rarely explain their output. What if they could give a detailed explanation along with the code? That is what I have tried to do here. The base Llama-2 model was used for training. It is trained on around **290,000** sets of code, each set containing 2 conversations. Code in Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell and other languages, together with detailed explanations, was used for training. It is built on my existing datasets [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT) and [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT). The conversations are in Vicuna/ShareGPT format, and each set includes a detailed explanation alongside the code. I have released the new dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) on which this model is trained.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took 165 hours, using the DeepSpeed codebase. The model is a full fine-tune of Meta's Llama-2. Links to quantized models are given below.

**GPTQ, GGUF & AWQ**

- GPTQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GPTQ)
- GGUF: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GGUF)
- AWQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-AWQ)

Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) for making quantized versions of the model.

**Example Prompt:**

```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.

Context
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

You can modify the above prompt as per your requirements. I have used ShareGPT/Vicuna format v1.1.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

Will update soon.
beibeif/pixel_flappycube_v1
beibeif
2024-01-17T07:41:50Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-16T16:00:44Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixel_flappycube_v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 20.80 +/- 14.03
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.

To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
mesolitica/translation-t5-small-standard-bahasa-cased
mesolitica
2024-01-17T07:39:08Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "ms", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-04T17:01:07Z
---
language:
- ms
---

# Noisy Translation Small T5

Trained with a 1536 context length, this model is able to translate Malay, pasar Malay (social media texts or local context), English, Manglish, Javanese, Banjarese and Indonesian to a target language. It is also able to maintain the text structure as it is and translate only the necessary text, e.g. in programming code.

Try it at https://huggingface.co/spaces/mesolitica/malaysian-translation

## how-to

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'mesolitica/translation-t5-small-standard-bahasa-cased',
    use_fast=False
)
model = T5ForConditionalGeneration.from_pretrained(
    'mesolitica/translation-t5-small-standard-bahasa-cased'
)

s = 'Hai, ada yang bisa saya bantu?'
input_ids = tokenizer.encode(f'terjemah ke Melayu: {s}', return_tensors='pt')
outputs = model.generate(input_ids, max_length=100)
all_special_ids = [0, 1, 2]
outputs = [i for i in outputs[0] if i not in all_special_ids]
print(tokenizer.decode(outputs, spaces_between_special_tokens=False))
```
kijeong22/swin-finetuned
kijeong22
2024-01-17T07:38:37Z
14
0
transformers
[ "transformers", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-04T14:40:10Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-finetuned This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6900 - Accuracy: 0.5407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.693 | 1.0 | 605 | 0.6910 | 0.5407 | | 0.6853 | 2.0 | 1211 | 0.6900 | 0.5407 | | 0.6875 | 3.0 | 1817 | 0.6903 | 0.5407 | | 0.6988 | 4.0 | 2420 | 0.6900 | 0.5407 | ### Framework versions - Transformers 4.36.2 - Pytorch 1.13.1+cu116 - Datasets 2.16.1 - Tokenizers 0.15.0
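The card reports evaluation metrics but no usage example; since this is a fine-tuned Swin image-classification checkpoint (per its tags), a minimal sketch could be the following. The image path is a placeholder and the label names depend on the repo's config.

```python
# Hedged sketch: run the fine-tuned Swin checkpoint through the
# image-classification pipeline; "photo.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="kijeong22/swin-finetuned")
for prediction in classifier("photo.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```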
HIT-SCIR/Chinese-Mixtral-8x7B-adapter
HIT-SCIR
2024-01-17T07:33:16Z
0
0
null
[ "safetensors", "arxiv:2401.04088", "arxiv:2109.07306", "license:apache-2.0", "region:us" ]
null
2024-01-15T06:10:26Z
--- license: apache-2.0 --- <div align="center"> <h1> Chinese-Mixtral-8x7B </h1> </div> ![](img/logo.png) <div align="center"> <a href="https://github.com/HIT-SCIR/Chinese-Mixtral-8x7B/pulls"> <image src="https://img.shields.io/badge/PRs-welcome-brightgreen"></image> <image src="https://img.shields.io/badge/License-Apache_2.0-green.svg"></image> </a> </div> ## 🚀 介绍 本项目基于Mistral发布的模型[Mixtral-8x7B](https://mistral.ai/news/mixtral-of-experts/)进行了中文扩词表增量预训练,希望进一步促进中文自然语言处理社区对MoE模型的研究。我们扩充后的词表显著提高了模型对中文的编解码效率,并通过大规模开源语料对扩词表模型进行增量预训练,使模型具备了强大的中文生成和理解能力。 项目开源内容: - 中文Mixtral-8x7B扩词表大模型 - 扩词表增量预训练代码 > 请注意,Chinese-Mixtral-8x7B仍然可能生成包含事实性错误的误导性回复或包含偏见/歧视的有害内容,请谨慎鉴别和使用生成的内容,请勿将生成的有害内容传播至互联网。 ## 📥 模型下载 本项目使用QLoRA进行训练,LoRA权重与合并权重后的模型分别开源,您可以根据自己的需求选择下载: | 模型名称 | 模型大小 | 下载地址 | 备注 | |:----------------------------:|:-----:|:-----------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:| | Chinese-Mixtral-8x7B | 88GB | [🤗HuggingFace](https://huggingface.co/HIT-SCIR/Chinese-Mixtral-8x7B) | 中文扩词表完整模型,可以直接使用 | | Chinese-Mixtral-8x7B-adapter | 2.7GB | [🤗HuggingFace](https://huggingface.co/HIT-SCIR/Chinese-Mixtral-8x7B-adapter) | LoRA权重,需要与原版Mixtral-8x7B进行合并才可以使用,合并脚本请参考[这里](https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930) | ## 💻 模型推理 Chinese-Mixtral-8x7B支持完整的Mixtral-8x7B模型生态,包括使用`vLLM`、`Flash Attention 2`进行加速,使用`bitsandbytes`进行模型量化等。以下是使用Chinese-Mixtral-8x7B进行推理的代码示例。 使用Flash Attention 2: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "HIT-SCIR/Chinese-Mixtral-8x7B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, device_map="auto") text = "我的名字是" inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` 使用4bit量化: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "HIT-SCIR/Chinese-Mixtral-8x7B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto") text = "我的名字是" inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` 请注意,Chinese-Mixtral-8x7B为基座模型,没有经过指令微调,因此指令遵循能力有限。您可以参考[微调](#微调)一节对模型进行微调。 ## 📈 模型性能 ### 模型综合能力 我们分别使用以下评测数据集对Chinese-Mixtral-8x7B进行评测: - C-Eval:一个全面的中文基础模型评估套件。它包含了13948个多项选择题,涵盖了52个不同的学科和四个难度级别。 - CMMLU:一个综合性的中文评估基准,专门用于评估语言模型在中文语境下的知识和推理能力,涵盖了从基础学科到高级专业水平的67个主题。 - MMLU:一个包含57个多选任务的英文评测数据集,涵盖了初等数学、美国历史、计算机科学、法律等,难度覆盖高中水平到专家水平,是目前主流的LLM评测数据集之一。 - HellaSwag:一个极具挑战的英文NLI评测数据集,每一个问题都需要对上下文进行深入理解,而不能基于常识进行回答。 根据Mistral发布的[技术报告](https://arxiv.org/pdf/2401.04088.pdf),Mixtral-8x7B在推理时将激活13B参数。下表为Chinese-Mixtral-8x7B与其他13B规模的中文扩词表模型在各个评测数据集上的5-shot结果: | 模型名称 | 增量训练语料 | C-Eval<br>(中文) | CMMLU<br>(中文) | MMLU<br>(英文) | HellaSwag<br>(英文) | |:-----------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:-------------:|:------------:|:-----------------:| | [IDEA-CCNL/Ziya2-13B-Base](https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base) | 650B Token | 59.29 | 60.93 | 59.86 | 58.90 | | 
[TigerResearch/tigerbot-13b-base-v3](https://huggingface.co/TigerResearch/tigerbot-13b-base-v3) | 500B Token | 50.52 | 51.65 | 53.46 | 59.16 | | [Linly-AI/Chinese-LLaMA-2-13B-hf](https://huggingface.co/Linly-AI/Chinese-LLaMA-2-13B-hf) | 11B Token | 42.57 | 41.95 | 51.32 | 59.05 | | [hfl/chinese-llama-2-13b](https://huggingface.co/hfl/chinese-llama-2-13b) | 约30B Token(120GB) | 41.90 | 42.08 | 51.92 | 59.28 | | **Chinese-Mixtral-8x7B(本项目)** | 42B Token | 52.08 | 51.08 | 69.80 | 65.69 | 在中文知识和理解方面,我们的Chinese-Mixtral-8x7B与TigerBot-13B-Base-v3性能相当。由于Chinese-Mixtral-8x7B的训练数据量仅为TigerBot-13B-Base-v3的8%,我们的模型仍有进一步提升的空间。与此同时,得益于原版Mixtral-8x7B模型强大的性能,我们的Chinese-Mixtral-8x7B达到了各个扩词表模型的最强英文水平。 > 由于不同版本的评测脚本实现细节有细微差异,为了保证评测结果的一致性和公平性,我们的评测脚本统一使用EleutherAI发布的lm-evaluation-harness,commit hash为[28ec7fa](https://github.com/EleutherAI/lm-evaluation-harness/tree/28ec7fa950346b5a895e85e1f3edd5648168acc4)。 ### 模型生成效果 下表为各个扩词表模型的生成效果。由于部分模型的预训练语料未使用`eos_token`进行分隔,我们采用了`max_tokens = 100`对生成文本进行截断。我们的采样参数为`temperature = 0.8, top_p = 0.9`。 ![](./img/case.png) ### 中文编解码效率 针对中文编解码效率,我们使用各个扩词表模型的分词器对[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B)数据集的一个切片(2023-06_zh_head_0000.jsonl)进行编码,对比了各个分词器输出的中文文本Token量: | 模型名称 | 模型类别 | 词表大小 | 中文文本Token量 | 编解码效率 | |:----------------------------------:|:-------:|:-----:|:----------:|:-------:| | meta-llama/Llama-2-13B-hf | LLaMA | 32000 | 780M | 低 | | mistralai/Mixtral-8x7B-v0.1 | Mixtral | 32000 | 606M | 低 | | Linly-AI/Chinese-LLaMA-2-13B-hf | LLaMA | 40076 | 532M | 中 | | IDEA-CCNL/Ziya2-13B-Base | LLaMA | 39424 | 532M | 中 | | hfl/chinese-llama-2-13b | LLaMA | 55296 | 365M | 高 |、 | TigerResearch/tigerbot-13b-base-v3 | LLaMA | 65112 | 342M | 高 | | **Chinese-Mixtral-8x7B(本项目)** | Mixtral | 57000 | 355M | 高 | 在约1.4GB的测试文本中,我们的Chinese-Mixtral-8x7B中文编解码效率仅次于TigerBot-13B-Base-v3,较原模型提高了41.5%。这有利于加速中文文本的推理速度,并在In-Context Learning、Chain-of-Thought等场景中节省序列长度,有利于提高复杂推理任务的性能。 ## ⚙️ 训练细节 <details> <summary> ### 词表扩充 </summary> 我们使用`sentencepiece`在12G知乎数据和2G悟道数据上训练中文BPE词表。我们在训练词表时分别枚举了中文单字Token数量以及中文总Token数量,并对二者进行组合,得到了数百个大小、内容各异的词表。为了得到最适合的词表,我们通过Zheng Bo等人提出的[ALP](https://arxiv.org/pdf/2109.07306.pdf)计算这些词表的中文词汇能力。ALP通过计算特定语言的子词切分粒度,并对词表的中低频子词进行惩罚,是一种方便快捷的衡量特定语言词汇能力的指标。 我们在书籍和百科语料上评估了不同词表的ALP值。图示中,四条曲线分别代表四种中文单字Token数量的词表(4451、5435、6414和7434)。为了避免词表过小导致中文压缩率过低,以及词表过大导致embedding层过于稀疏,我们选取ALP曲线的拐点,对应向词表中新增25000个中文Token。在此基础上,我们选择了四条曲线中ALP最大者,即新增6414个中文单字Token的词表,作为最终Chinese-Mixtral-8x7B选用的词表。 ![](./img/alp.png) 在获得新词表后,我们需要对embedding和lm_head层进行扩充和初始化。我们使用新Token在旧embedding层中的词嵌入平均值对扩充部分进行初始化。在我们的前期实验中,这种方法略优于HuggingFace的默认实现,即使用固定的正态分布进行初始化。 </details> <details> <summary> ### 增量预训练 </summary> Mixtral-8x7B模型参数量为46.7B,全参数训练需要同时使用多种并行策略,在训练资源受限的情况下时间成本过高。因此我们采用HuggingFace官方推荐的方法,使用QLoRA对模型进行训练。QLoRA在LoRA低秩分解的基础上,通过引入4位量化、双重量化和利用NVIDIA统一内存进行分页,进一步减少了训练所需显存,同时保持了与全参数训练相当的性能。 我们参考Yiming Cui等人[对LoRA的设置](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/blob/main/scripts/training/run_pt.sh),对原模型所有Linear层应用低秩分解,并将扩增后的embedding和lm_head层的参数设置为可训练。对于模型主体,我们采用NF4格式进行量化,这种格式可以使得量化后的数据与量化前具有同等的数据分布,模型的权重信息损失更少。 #### 环境准备 我们建议使用Python 3.10 + torch 2.0.1 ```shell # Pytorch + Transformers $ pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 $ pip install transformers==4.36.2 datasets evaluate peft accelerate gradio optimum sentencepiece $ pip install jupyterlab scikit-learn pandas matplotlib tensorboard nltk rouge bitsandbytes fire # DeepSpeed $ git clone https://github.com/microsoft/DeepSpeed.git $ cd DeepSpeed $ DS_BUILD_FUSED_ADAM=1 pip3 install . 
# Flash Attention $ pip install flash-attn --no-build-isolation ``` #### 数据集下载 我们基于现有的开源数据集训练了Chinese-Mixtral-8x7B,数据集包括: | 数据集名称 | 数据集语言 |使用数据量| 备注 | |:----------------------------------------------------------------------------:|:-----:|:----------------:|:-----:| | [Skywork/SkyPile-150B](https://huggingface.co/datasets/Skywork/SkyPile-150B) | 中文 |30B| 仅使用2022 + 2023年的数据 | | [DKYoon/SlimPajama-6B](https://huggingface.co/datasets/DKYoon/SlimPajama-6B) | 英文 |12B| 数据集重复2 Epoch | 通过`data/download.py`将数据集下载到`data`中。针对Slimpajama数据集,需要使用`data/parquet2jsonl.py`将原始数据集转换为`jsonl`格式。 下载后的数据集为多个jsonl文件的分片,使用`cat`将多个分片合并为一个jsonl文件。 ```shell $ cat *.jsonl > all.jsonl ``` 通过`split`将jsonl切分为train和valid集合。本项目中train和valid的行数比例为999:1。 ```shell $ wc -l all.jsonl # 计算数据集总行数 $ split -l <lines> all.jsonl # 按999:1计算train/valid行数,进行切分 $ mv xaa DKYoon-SlimPajama-6B-train.jsonl # 重命名 $ mv xab DKYoon-SlimPajama-6B-dev.jsonl ``` #### 数据集预处理 将数据集名称和路径注册到`data/datasets.toml`中: ```toml [DKYoon-SlimPajama-6B] # 数据集名称 splits = ["train", "dev"] # 数据集train/valid集合 root = "{DATA_DIR}/en/{name}" # 数据集根目录 doc = "{name}-{split}" # 数据集文件名 encoded = "encoded-{name}-{split}" # 预处理保存位置 ``` 使用`data/preprocess_datasets.py`对数据集进行子词切分,从而加快训练速度。 ```shell $ python data/preprocess_datasets.py --ds_name SkyPile-150B-2023 --tokenizer_name_or_path tokenizer/Mixtral-8x7B-v0.1-vocab $ python data/preprocess_datasets.py --ds_name DKYoon-SlimPajama-6B --tokenizer_name_or_path tokenizer/Mixtral-8x7B-v0.1-vocab ``` 在进行子词切分后,可以使用`data/utils.py`查看各个数据集的token总量: ```shell $ python data/utils.py ``` #### 开始训练 训练启动脚本为`scripts/train.sh`。可以通过修改其中的`TRAIN_DATASETS`修改训练数据集和数据集比例: ```shell TRAIN_DATASETS=( 1:SkyPile-150B-2022 # 使用全量SkyPile-150B-2022 0.1:SkyPile-150B-2023 # 使用SkyPile-150B-2023的10%数据 1:DKYoon-SlimPajama-6B # 使用全量DKYoon-SlimPajama-6B ) ``` 如果您使用SLURM集群管理系统,可以通过`sbatch`进行提交: ```shell $ sbatch scripts/train.sh ``` 如果没有SLURM或希望通过命令行启动训练,您可以直接提取`scripts/train.sh`中的`torchrun`开始训练。 </details> <details> <summary> ### 微调 </summary> 本项目发布的Chinese-Mixtral-8x7B为基座模型,没有经过微调。如果您希望使用Chinese-Mixtral-8x7B进行下游任务微调或SFT,可以参考HuggingFace给出Mixtral-8x7B的QLoRA微调脚本进行训练:[HuggingFace的官方示例代码](https://github.com/huggingface/trl/blob/main/examples/scripts/sft.py)。 </details> ## ✒️ 引用 如果您觉得本项目对您的研究有所帮助或使用了本项目的代码,请引用本项目: ```bibtex @misc{Chinese-Mixtral-8x7B, author = {HIT-SCIR}, title = {Chinese-Mixtral-8x7B: An Open-Source Mixture-of-Experts LLM}, year = {2024}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/HIT-SCIR/Chinese-Mixtral-8x7B}} } ``` ## 🌟 Star History [![Star History Chart](https://api.star-history.com/svg?repos=HIT-SCIR/Chinese-Mixtral-8x7B&type=Date)](https://star-history.com/#HIT-SCIR/Chinese-Mixtral-8x7B&Date)
LoneStriker/Code-290k-13B-4.0bpw-h6-exl2
LoneStriker
2024-01-17T07:27:17Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T07:24:24Z
---
license: cc-by-nc-nd-4.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
language:
- en
tags:
- code
---

**Code-290k-13B**

Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes and rarely explain their output. What if they could give a detailed explanation along with the code? That is what I have tried to do here. The base Llama-2 model was used for training. It is trained on around **290,000** sets of code, each set containing 2 conversations. Code in Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell and other languages, together with detailed explanations, was used for training. It is built on my existing datasets [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT) and [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT). The conversations are in Vicuna/ShareGPT format, and each set includes a detailed explanation alongside the code. I have released the new dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) on which this model is trained.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took 165 hours, using the DeepSpeed codebase. The model is a full fine-tune of Meta's Llama-2. Links to quantized models are given below.

**GPTQ, GGUF & AWQ**

- GPTQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GPTQ)
- GGUF: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GGUF)
- AWQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-AWQ)

Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) for making quantized versions of the model.

**Example Prompt:**

```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.

Context
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

You can modify the above prompt as per your requirements. I have used ShareGPT/Vicuna format v1.1.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

Will update soon.
aixsatoshi/calm2-7b-chat-7b-moe
aixsatoshi
2024-01-17T07:24:18Z
10
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T05:38:18Z
---
license: apache-2.0
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model is a Mixture of Experts (MoE) composition in which cyberagent/calm2-7b serves as the foundational base model and cyberagent/calm2-7b-chat is incorporated as a chat model. The model is designed to combine the general-purpose language processing capabilities of calm2-7b with the specialized conversational abilities of calm2-7b-chat.

## Model Details

The model uses the following expert models for generating responses:

1. **Source Model**: `cyberagent/calm2-7b-chat`
   - **Positive Prompts**: ["USER: ", "ASSISTANT: "]
   - This source model is utilized to provide responses in a chat-based context, taking both user and assistant inputs into account.

2. **Source Model**: `cyberagent/calm2-7b`
   - **Positive Prompts**: [""]
   - This source model contributes to generating responses without a specific chat context, serving as a general-purpose language model.

Model size: 11.3B\
Context length: 32768\
Language(s): Japanese, English

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://huggingface.co/cyberagent/calm2-7b
- **Repository:** https://huggingface.co/cyberagent/calm2-7b-chat

### Limitations and Considerations

While this MoE model integrates the strengths of cyberagent/calm2-7b-chat and cyberagent/calm2-7b, it is an experimental model and has not been fine-tuned post-composition. Users are therefore advised to perform their own tuning and optimization to adapt the model to their specific use cases and requirements.
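The card describes the two experts and their routing prompts but gives no usage code; since the repo ships Mixtral-architecture safetensors loadable with transformers (per its tags), a minimal generation sketch might look like this. The USER:/ASSISTANT: format follows the routing prompts above; dtype and device settings are assumptions.

```python
# Hedged sketch: load the merged MoE checkpoint with transformers and prompt it
# in the USER:/ASSISTANT: format suggested by the routing prompts above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aixsatoshi/calm2-7b-chat-7b-moe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # dtype/device are assumptions
)

prompt = "USER: What is the highest mountain in Japan?\nASSISTANT: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```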
rooban2005/the-tiger
rooban2005
2024-01-17T07:23:35Z
19
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-15T14:29:55Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### The-Tiger Dreambooth model trained by rooban2005 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 22TD0385 Sample pictures of this concept: ![0](https://huggingface.co/rooban2005/the-tiger/resolve/main/sample_images/ddcb51397816412f91838f539f8b895f.png) ![1](https://huggingface.co/rooban2005/the-tiger/resolve/main/sample_images/11ef991a98e34f968294ac363aac7b46.png) ![2](https://huggingface.co/rooban2005/the-tiger/resolve/main/sample_images/c078edb64c1a4f7ea16ef05bd8246915.png) ![3](https://huggingface.co/rooban2005/the-tiger/resolve/main/sample_images/tiger-standing-on-the-water.png) ![4](https://huggingface.co/rooban2005/the-tiger/resolve/main/sample_images/aa17244c6c364f7888aff5fe295a436c.png)
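The Dreambooth card shows sample images but no generation code; a minimal diffusers sketch is given below. The prompt wording is a guess, as the card does not state the exact instance token used during training.

```python
# Hedged sketch: sample an image from the Dreambooth checkpoint with diffusers.
# The prompt wording is a guess; the card does not state the exact instance token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("rooban2005/the-tiger", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("the-tiger standing on the water, photorealistic").images[0]
image.save("tiger.png")
```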
Hongsong/CHS_FrozenLake
Hongsong
2024-01-17T07:20:09Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T07:19:18Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: CHS_FrozenLake results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false ---
LoneStriker/Code-290k-13B-3.0bpw-h6-exl2
LoneStriker
2024-01-17T07:18:38Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T07:16:23Z
---
license: cc-by-nc-nd-4.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
language:
- en
tags:
- code
---

**Code-290k-13B**

Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes and rarely explain their output. What if they could give a detailed explanation along with the code? That is what I have tried to do here. The base Llama-2 model was used for training. It is trained on around **290,000** sets of code, each set containing 2 conversations. Code in Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell and other languages, together with detailed explanations, was used for training. It is built on my existing datasets [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT) and [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT). The conversations are in Vicuna/ShareGPT format, and each set includes a detailed explanation alongside the code. I have released the new dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) on which this model is trained.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took 165 hours, using the DeepSpeed codebase. The model is a full fine-tune of Meta's Llama-2. Links to quantized models are given below.

**GPTQ, GGUF & AWQ**

- GPTQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GPTQ)
- GGUF: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GGUF)
- AWQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-AWQ)

Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) for making quantized versions of the model.

**Example Prompt:**

```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.

Context
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

You can modify the above prompt as per your requirements. I have used ShareGPT/Vicuna format v1.1.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

Will update soon.
ntc-ai/SDXL-LoRA-slider.charming
ntc-ai
2024-01-17T07:18:20Z
21
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-17T07:18:17Z
--- language: - en thumbnail: "images/evaluate/charming.../charming_17_3.0.png" widget: - text: charming output: url: images/charming_17_3.0.png - text: charming output: url: images/charming_19_3.0.png - text: charming output: url: images/charming_20_3.0.png - text: charming output: url: images/charming_21_3.0.png - text: charming output: url: images/charming_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "charming" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - charming (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/charming_17_-3.0.png" width=256 height=256 /> | <img src="images/charming_17_0.0.png" width=256 height=256 /> | <img src="images/charming_17_3.0.png" width=256 height=256 /> | | <img src="images/charming_19_-3.0.png" width=256 height=256 /> | <img src="images/charming_19_0.0.png" width=256 height=256 /> | <img src="images/charming_19_3.0.png" width=256 height=256 /> | | <img src="images/charming_20_-3.0.png" width=256 height=256 /> | <img src="images/charming_20_0.0.png" width=256 height=256 /> | <img src="images/charming_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` charming ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.charming', weight_name='charming.safetensors', adapter_name="charming") # Activate the LoRA pipe.set_adapters(["charming"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, charming" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
MaziyarPanahi/speechless-mistral-six-in-one-7b-orth-1.0-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T07:06:40Z
20
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "uukuguy/speechless-mistral-six-in-one-7b-orth-1.0", "pytorch", "code", "en", "dataset:jondurbin/airoboros-2.2.1", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:ehartford/samantha-data", "dataset:CollectiveCognition/chats-data-2023-09-27", "dataset:stingning/ultrachat", "arxiv:2310.06825", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T07:01:49Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - uukuguy/speechless-mistral-six-in-one-7b-orth-1.0 - transformers - pytorch - mistral - text-generation - code - en - dataset:jondurbin/airoboros-2.2.1 - dataset:Open-Orca/OpenOrca - dataset:garage-bAInd/Open-Platypus - dataset:ehartford/samantha-data - dataset:CollectiveCognition/chats-data-2023-09-27 - dataset:stingning/ultrachat - arxiv:2310.06825 - license:apache-2.0 - model-index - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # speechless-mistral-six-in-one-7b-orth-1.0-Mistral-7B-Instruct-v0.1 speechless-mistral-six-in-one-7b-orth-1.0-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [uukuguy/speechless-mistral-six-in-one-7b-orth-1.0](https://huggingface.co/uukuguy/speechless-mistral-six-in-one-7b-orth-1.0) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: uukuguy/speechless-mistral-six-in-one-7b-orth-1.0 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/speechless-mistral-six-in-one-7b-orth-1.0-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
kavanatn/submerged-heaven
kavanatn
2024-01-17T07:06:17Z
0
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-17T07:02:15Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### Submerged-Heaven Dreambooth model trained by kavanatn following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 4BD22Cs066 Sample pictures of this concept: ![0](https://huggingface.co/kavanatn/submerged-heaven/resolve/main/sample_images/xzg_(1).png)
zhangyanchao/whisper-small-hi-v2
zhangyanchao
2024-01-17T07:01:31Z
5
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-17T02:15:15Z
--- language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small Hi - Sanchit Gandhi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hi - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
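The trainer-generated card lists hyperparameters only; since this is a Whisper checkpoint fine-tuned for Hindi speech recognition (per its tags), a minimal transcription sketch might look like the following. The audio path is a placeholder and the chunking setting is illustrative.

```python
# Hedged sketch: transcribe Hindi audio with the fine-tuned Whisper checkpoint.
# "sample_hi.wav" is a placeholder path; chunk_length_s is an illustrative setting.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="zhangyanchao/whisper-small-hi-v2",
    chunk_length_s=30,
)
print(asr("sample_hi.wav")["text"])
```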
MaziyarPanahi/speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T06:37:32Z
23
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "uukuguy/speechless-code-mistral-7b-v2.0", "pytorch", "code", "en", "dataset:jondurbin/airoboros-2.2", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:TokenBender/python_eval_instruct_51k", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:meta-math/MetaMathQA", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T06:32:30Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - uukuguy/speechless-code-mistral-7b-v2.0 - transformers - pytorch - mistral - text-generation - code - en - dataset:jondurbin/airoboros-2.2 - dataset:Open-Orca/OpenOrca - dataset:garage-bAInd/Open-Platypus - dataset:WizardLM/WizardLM_evol_instruct_V2_196k - dataset:TokenBender/python_eval_instruct_51k - dataset:ise-uiuc/Magicoder-OSS-Instruct-75K - dataset:meta-math/MetaMathQA - license:apache-2.0 - model-index - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.1 speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [uukuguy/speechless-code-mistral-7b-v2.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v2.0) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: uukuguy/speechless-code-mistral-7b-v2.0 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
stanpony/medical-diagnosis-classifier
stanpony
2024-01-17T06:17:26Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "region:us" ]
null
2024-01-17T00:14:27Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: medical-diagnosis-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medical-diagnosis-classifier This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8761 - Accuracy: 0.5812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.915 | 0.44 | 5000 | 0.9458 | 0.5413 | | 0.9446 | 0.87 | 10000 | 0.9111 | 0.5734 | | 0.9701 | 1.31 | 15000 | 0.9020 | 0.5728 | | 1.0364 | 1.75 | 20000 | 0.9053 | 0.5746 | | 1.0566 | 2.18 | 25000 | 0.8934 | 0.5723 | | 0.7617 | 2.62 | 30000 | 0.8903 | 0.5697 | | 0.8615 | 3.06 | 35000 | 0.8825 | 0.5886 | | 0.8974 | 3.49 | 40000 | 0.8896 | 0.5760 | | 0.877 | 3.93 | 45000 | 0.8854 | 0.5827 | | 0.8099 | 4.37 | 50000 | 0.8864 | 0.5754 | | 0.8527 | 4.8 | 55000 | 0.8825 | 0.5853 | | 0.892 | 5.24 | 60000 | 0.8869 | 0.5714 | | 1.0117 | 5.68 | 65000 | 0.8835 | 0.5780 | | 0.8814 | 6.11 | 70000 | 0.8770 | 0.5812 | | 1.0064 | 6.55 | 75000 | 0.8845 | 0.5771 | | 0.9091 | 6.99 | 80000 | 0.8837 | 0.5740 | | 0.8869 | 7.42 | 85000 | 0.8780 | 0.5839 | | 0.9656 | 7.86 | 90000 | 0.8916 | 0.5668 | | 0.8205 | 8.3 | 95000 | 0.8767 | 0.5855 | | 0.9256 | 8.73 | 100000 | 0.8772 | 0.5840 | | 0.8649 | 9.17 | 105000 | 0.8769 | 0.5824 | | 0.9214 | 9.61 | 110000 | 0.8761 | 0.5812 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
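The card reports accuracy but no label mapping or usage snippet; since this is a fine-tuned bert-base-cased sequence classifier (per its tags), a minimal sketch is shown below. The example sentence is illustrative and the predicted label names come from the repo's own config.

```python
# Hedged sketch: classify a clinical-style sentence with the fine-tuned BERT model.
# The label names printed here come from the repo's own config, not from this card.
from transformers import pipeline

classifier = pipeline("text-classification", model="stanpony/medical-diagnosis-classifier")
print(classifier("Patient reports persistent cough and mild fever for three days."))
```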
computational-mama-research/tired-mom-octos
computational-mama-research
2024-01-17T06:14:47Z
343
2
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-17T06:14:23Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: a woman laying in a bed with a large white snake in the style of <s0><s1> output: url: image-0.png - text: a group of doctors and nurses in a hospital room in the style of <s0><s1> output: url: image-1.png - text: a woman in a lab coat standing next to a machine in the style of <s0><s1> output: url: image-2.png - text: a group of people standing around a table with some old radios in the style of <s0><s1> output: url: image-3.png - text: a group of people standing around a light in a room in the style of <s0><s1> output: url: image-4.png - text: a group of people in an office with computers in the style of <s0><s1> output: url: image-5.png - text: a group of doctors and nurses in a hospital room in the style of <s0><s1> output: url: image-6.png - text: a black and white photo of doctors in a hospital room in the style of <s0><s1> output: url: image-7.png - text: a black and white photo of doctors in a hospital in the style of <s0><s1> output: url: image-8.png - text: a woman laying on her stomach with her eyes closed in the style of <s0><s1> output: url: image-9.png - text: a woman standing in a hospital room with a bed in the style of <s0><s1> output: url: image-10.png - text: a woman in a hospital bed with a light shining on her in the style of <s0><s1> output: url: image-11.png - text: a man in a hospital bed in a glass egg in the style of <s0><s1> output: url: image-12.png - text: a woman sitting in a bubble in a hospital room in the style of <s0><s1> output: url: image-13.png - text: a woman sitting in front of a row of ovens in the style of <s0><s1> output: url: image-14.png - text: a woman wearing headphones in the style of <s0><s1> output: url: image-15.png - text: a woman sitting on a chair in a room in the style of <s0><s1> output: url: image-16.png - text: a woman wearing a helmet in a room in the style of <s0><s1> output: url: image-17.png - text: a woman sitting in a chair with a head covering in the style of <s0><s1> output: url: image-18.png - text: a woman and child sitting at a desk with an octopus in the style of <s0><s1> output: url: image-19.png - text: a woman and child sitting at a typewriter in the style of <s0><s1> output: url: image-20.png - text: a woman holding a baby and octopus on a typewriter in the style of <s0><s1> output: url: image-21.png - text: a woman holding a typewriter and octopus in the style of <s0><s1> output: url: image-22.png - text: a woman holding a baby and a typewriter in the style of <s0><s1> output: url: image-23.png - text: a woman and child sitting at a typewriter with a large octopus in the style of <s0><s1> output: url: image-24.png - text: a woman with octopus tentacles on her typewriter in the style of <s0><s1> output: url: image-25.png - text: a woman and child sitting at a desk with octopus tentacles in the style of <s0><s1> output: url: image-26.png - text: a woman and a child working on a typewriter in the style of <s0><s1> output: url: image-27.png - text: a woman and child sitting at a desk with an old typewriter in the style of <s0><s1> output: url: image-28.png - text: a woman and a child sitting at a table with an old typewriter in the style of <s0><s1> output: url: image-29.png - text: a woman and a child sitting at a typewriter in the style of <s0><s1> output: url: image-30.png - text: a woman sitting at a desk with octopus tentacles on her head in the style of <s0><s1> output: url: 
image-31.png - text: a woman sitting at a desk with octopus tentacles on her desk in the style of <s0><s1> output: url: image-32.png - text: a woman sitting at a desk with a typewriter in the style of <s0><s1> output: url: image-33.png - text: a woman sitting at a desk with a typewriter and octopus in the style of <s0><s1> output: url: image-34.png - text: a woman and a boy sitting at a desk with a typewriter in the style of <s0><s1> output: url: image-35.png - text: a woman and a child holding a typewriter in front of an octopus in the style of <s0><s1> output: url: image-36.png - text: a woman and child sitting at a desk with a giant octopus in the style of <s0><s1> output: url: image-37.png - text: a woman and child sitting at a desk with octopus tentacles in the style of <s0><s1> output: url: image-38.png - text: a woman and child sitting at a table with an octopus on the table in the style of <s0><s1> output: url: image-39.png - text: a woman and a baby sitting at a table with octopus toys in the style of <s0><s1> output: url: image-40.png - text: a woman holding a baby and a laptop in the style of <s0><s1> output: url: image-41.png - text: a woman and a child sitting at a table with octopus tentacles in the style of <s0><s1> output: url: image-42.png - text: a woman holding a baby and a typewriter in the style of <s0><s1> output: url: image-43.png - text: a woman and a child sitting at a desk with octopus tentacles in the style of <s0><s1> output: url: image-44.png - text: a woman holding a baby and a typewriter in the style of <s0><s1> output: url: image-45.png - text: a woman sitting at a desk with an octopus on her lap in the style of <s0><s1> output: url: image-46.png - text: a woman and child sitting at a table with a large octopus in the style of <s0><s1> output: url: image-47.png - text: a woman sitting at a desk with a typewriter and a bunch of snakes in the style of <s0><s1> output: url: image-48.png - text: a woman and child sitting at a desk with a typewriter in the style of <s0><s1> output: url: image-49.png - text: a woman sitting at a desk with a typewriter in the style of <s0><s1> output: url: image-50.png - text: a woman sitting at a desk with a typewriter in the style of <s0><s1> output: url: image-51.png - text: a woman and child sitting at a typewriter with a large octopus in the style of <s0><s1> output: url: image-52.png - text: a woman with octopus tentacles on her typewriter in the style of <s0><s1> output: url: image-53.png - text: a woman and child sitting at a table with a large octopus in the style of <s0><s1> output: url: image-54.png - text: a woman holding a baby and a laptop in the style of <s0><s1> output: url: image-55.png - text: a woman holding a baby and a typewriter in the style of <s0><s1> output: url: image-56.png - text: a woman and child sitting at a desk with octopus tentacles in the style of <s0><s1> output: url: image-57.png - text: a woman holding a baby and a typewriter in the style of <s0><s1> output: url: image-58.png - text: a woman and a boy sitting at a desk with an octopus on the desk in the style of <s0><s1> output: url: image-59.png - text: a woman sitting at a desk with a typewriter and an octopus in the style of <s0><s1> output: url: image-60.png - text: a woman sitting at a typewriter with a snake on her lap in the style of <s0><s1> output: url: image-61.png - text: a woman sitting at a desk with a typewriter and a snake in the style of <s0><s1> output: url: image-62.png - text: a black and white photo of a hospital room in the 
style of <s0><s1> output: url: image-63.png - text: a black and white photo of two doctors in a hospital room in the style of <s0><s1> output: url: image-64.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: in the style of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - computational-mama/tired-mom-octos <Gallery /> ## Model description ### These are computational-mama/tired-mom-octos LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`tired-mom-octos.safetensors` here 💾](/computational-mama/tired-mom-octos/blob/main/tired-mom-octos.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:tired-mom-octos:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`tired-mom-octos_emb.safetensors` here 💾](/computational-mama/tired-mom-octos/blob/main/tired-mom-octos_emb.safetensors)**. - Place it in your `embeddings` folder. - Use it by adding `tired-mom-octos_emb` to your prompt. For example, `in the style of tired-mom-octos_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('computational-mama/tired-mom-octos', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='computational-mama/tired-mom-octos', filename='tired-mom-octos_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('in the style of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/computational-mama/tired-mom-octos/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
MaziyarPanahi/CollectiveCognition-v1-Mistral-7B-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T06:08:13Z
20
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "teknium/CollectiveCognition-v1-Mistral-7B", "pytorch", "mistral-7b", "instruct", "finetune", "gpt4", "synthetic data", "distillation", "sharegpt", "en", "dataset:CollectiveCognition/chats-data-2023-09-27", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us", "conversational" ]
text-generation
2024-01-17T06:03:09Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - teknium/CollectiveCognition-v1-Mistral-7B - transformers - pytorch - mistral - text-generation - mistral-7b - instruct - finetune - gpt4 - synthetic data - distillation - sharegpt - en - dataset:CollectiveCognition/chats-data-2023-09-27 - base_model:mistralai/Mistral-7B-v0.1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us --- # CollectiveCognition-v1-Mistral-7B-Mistral-7B-Instruct-v0.1 CollectiveCognition-v1-Mistral-7B-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [teknium/CollectiveCognition-v1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: teknium/CollectiveCognition-v1-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/CollectiveCognition-v1-Mistral-7B-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
0xFE00/dqn-SpaceInvadersNoFrameskip-v4
0xFE00
2024-01-17T06:02:35Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T06:02:02Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 577.00 +/- 183.03 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 0xFE00 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 0xFE00 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga 0xFE00 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Amanaccessassist/Mistal7-sent
Amanaccessassist
2024-01-17T05:58:13Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b", "base_model:adapter:unsloth/mistral-7b", "region:us" ]
null
2024-01-17T05:56:51Z
--- library_name: peft base_model: unsloth/mistral-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
MaziyarPanahi/Rabbit-7B-DPO-Chat-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T05:57:47Z
18
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "viethq188/Rabbit-7B-DPO-Chat", "pytorch", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T05:53:00Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - viethq188/Rabbit-7B-DPO-Chat - transformers - pytorch - mistral - text-generation - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # Rabbit-7B-DPO-Chat-Mistral-7B-Instruct-v0.1 Rabbit-7B-DPO-Chat-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [viethq188/Rabbit-7B-DPO-Chat](https://huggingface.co/viethq188/Rabbit-7B-DPO-Chat) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: viethq188/Rabbit-7B-DPO-Chat layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Rabbit-7B-DPO-Chat-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
hammad117/falcon-7b-instruct-ft-adapters
hammad117
2024-01-17T05:57:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-16T10:49:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
asude55/youtube-da25
asude55
2024-01-17T05:53:08Z
7
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:dbmdz/bert-base-turkish-cased", "base_model:finetune:dbmdz/bert-base-turkish-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-07T08:10:30Z
--- license: mit base_model: dbmdz/bert-base-turkish-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: emotion-turkish16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion-turkish16 This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2711 - Accuracy: 0.9143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 71 | 0.3386 | 0.8857 | | No log | 2.0 | 142 | 0.2711 | 0.9143 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
MaziyarPanahi/speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T05:49:12Z
21
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "uukuguy/speechless-code-mistral-orca-7b-v1.0", "pytorch", "llama-2", "code", "en", "dataset:jondurbin/airoboros-2.2", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:TokenBender/python_eval_instruct_51k", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational", "license:apache-2.0" ]
text-generation
2024-01-17T05:44:14Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - uukuguy/speechless-code-mistral-orca-7b-v1.0 - transformers - pytorch - mistral - text-generation - llama-2 - code - en - dataset:jondurbin/airoboros-2.2 - dataset:Open-Orca/OpenOrca - dataset:garage-bAInd/Open-Platypus - dataset:WizardLM/WizardLM_evol_instruct_V2_196k - dataset:TokenBender/python_eval_instruct_51k - license:llama2 - model-index - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.1 speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [uukuguy/speechless-code-mistral-orca-7b-v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-orca-7b-v1.0) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: uukuguy/speechless-code-mistral-orca-7b-v1.0 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
varundataeaze/vit-base-patch16-224-in21k-finetuned-lora-food101
varundataeaze
2024-01-17T05:46:34Z
0
0
peft
[ "peft", "safetensors", "vit", "arxiv:1910.09700", "base_model:google/vit-base-patch16-224-in21k", "base_model:adapter:google/vit-base-patch16-224-in21k", "region:us" ]
null
2024-01-16T10:59:39Z
--- library_name: peft base_model: google/vit-base-patch16-224-in21k --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LarryAIDraw/asagiMutsukiV1
LarryAIDraw
2024-01-17T05:44:32Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-17T05:42:02Z
--- license: creativeml-openrail-m --- https://civitai.com/models/12372/asagi-mutsuki-lora
smangrul/tinyllama_lora_adcopy
smangrul
2024-01-17T05:42:26Z
100
0
peft
[ "peft", "tensorboard", "safetensors", "trl-sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "region:us" ]
null
2024-01-17T05:34:46Z
--- license: apache-2.0 library_name: peft tags: - trl-sft - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model-index: - name: tinyllama_lora_adcopy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama_lora_adcopy This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9386 | 1.0 | 129 | 0.8662 | | 0.7821 | 2.0 | 258 | 0.7954 | | 0.5269 | 3.0 | 387 | 0.7621 | | 0.4121 | 4.0 | 516 | 0.7183 | | 0.2169 | 5.0 | 645 | 0.7358 | | 0.1206 | 6.0 | 774 | 0.7757 | | 0.057 | 7.0 | 903 | 0.8003 | | 0.0291 | 8.0 | 1032 | 0.8342 | | 0.0097 | 9.0 | 1161 | 0.8800 | | 0.0077 | 10.0 | 1290 | 0.8992 | ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
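The card above lists training details but no loading snippet. A minimal sketch follows (not part of the original card), assuming the repo holds a standard PEFT LoRA adapter for the TinyLlama base model named in the card; the prompt and generation settings are illustrative only.
```python
# Minimal LoRA loading sketch (assumption: standard PEFT adapter layout;
# generation settings below are illustrative, not taken from the card).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "smangrul/tinyllama_lora_adcopy"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Write a short ad for a reusable water bottle.", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```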
smangrul/tinyllama_lora_norobots
smangrul
2024-01-17T05:41:01Z
297
0
peft
[ "peft", "tensorboard", "safetensors", "trl-sft", "generated_from_trainer", "dataset:generator", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "region:us" ]
null
2024-01-17T05:33:51Z
--- license: apache-2.0 library_name: peft tags: - trl-sft - generated_from_trainer datasets: - generator base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model-index: - name: tinyllama_lora_norobots results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama_lora_norobots This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.9106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8966 | 1.0 | 98 | 1.9106 | ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
LarryAIDraw/YukinoshitaYukino
LarryAIDraw
2024-01-17T05:40:54Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-17T05:32:48Z
--- license: creativeml-openrail-m --- https://civitai.com/models/265688/yukinoshita-yukino
LarryAIDraw/mira_tsubakihara_masterpiece-KK77-V2
LarryAIDraw
2024-01-17T05:40:42Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-17T05:32:03Z
--- license: creativeml-openrail-m --- https://civitai.com/models/42286?modelVersionId=145223
iyadycb/malaysian-mistral-7b-32k-instructions-v3.5-GGUF
iyadycb
2024-01-17T05:40:42Z
30
0
null
[ "gguf", "ms", "base_model:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5", "base_model:quantized:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-16T18:18:18Z
--- base_model: mesolitica/malaysian-mistral-7b-32k-instructions-v3.5 language: - ms --- # malaysian-mistral-7b-32k-instructions-v3.5 - GGUF - Model creator: [Mesolitica](https://huggingface.co/mesolitica) - Original model: [malaysian-mistral-7b-32k-instructions-v3.5](https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions-v3.5)
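The GGUF card above names the source model but gives no loading example. A minimal sketch with `llama-cpp-python` follows (not part of the original card); the quantized filename is a placeholder, since the actual file names in the repo are not listed above.
```python
# Minimal GGUF inference sketch (assumption: a quantized file exists in the repo;
# the filename below is a placeholder, not a confirmed artifact name).
from llama_cpp import Llama

llm = Llama(
    model_path="./malaysian-mistral-7b-32k-instructions-v3.5.q4_k_m.gguf",  # placeholder path
    n_ctx=4096,
)
out = llm("Terangkan apa itu model bahasa besar.", max_tokens=128)
print(out["choices"][0]["text"])
```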
TitanTec/poca-SoccerTwos
TitanTec
2024-01-17T05:32:25Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-01-17T05:31:41Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: TitanTec/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
raj-p/bert-finetuned-ner
raj-p
2024-01-17T05:31:24Z
3
1
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-17T05:19:30Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_keras_callback model-index: - name: raj-p/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # raj-p/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0273 - Validation Loss: 0.0522 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1806 | 0.0606 | 0 | | 0.0464 | 0.0540 | 1 | | 0.0273 | 0.0522 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
kykim0/Llama-2-7b-ultrachat200k-3e
kykim0
2024-01-17T05:31:00Z
9
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-14T06:59:51Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrachat_200k model-index: - name: Llama-2-7b-hf-sft-full-3e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-sft-full-3e This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 0.9247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.931 | 0.7 | 285 | 0.9350 | | 0.8672 | 1.7 | 570 | 0.9245 | | 0.8189 | 2.7 | 855 | 0.9248 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.0
minlno/q-FrozenLake-v1-4x4-noSlippery
minlno
2024-01-17T05:29:11Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T05:29:09Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="minlno/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
CLMBR/old-full-lstm-3
CLMBR
2024-01-17T05:24:49Z
31
0
transformers
[ "transformers", "pytorch", "rnn", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-01-12T15:11:58Z
--- tags: - generated_from_trainer model-index: - name: full-lstm-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # full-lstm-3 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9685 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.7832 | 0.03 | 76319 | 4.7485 | | 4.5027 | 0.03 | 152638 | 4.4708 | | 4.3608 | 0.03 | 228957 | 4.3370 | | 4.2693 | 1.03 | 305276 | 4.2557 | | 4.2052 | 0.03 | 381595 | 4.2002 | | 4.1539 | 1.03 | 457914 | 4.1587 | | 4.1188 | 0.03 | 534233 | 4.1278 | | 4.0903 | 0.03 | 610552 | 4.1043 | | 4.0598 | 1.03 | 686871 | 4.0848 | | 4.036 | 0.03 | 763190 | 4.0675 | | 4.0172 | 1.03 | 839509 | 4.0550 | | 4.001 | 0.03 | 915828 | 4.0447 | | 3.9809 | 0.03 | 992147 | 4.0355 | | 3.9667 | 0.03 | 1068467 | 4.0263 | | 3.9546 | 1.03 | 1144787 | 4.0188 | | 3.9525 | 0.03 | 1221107 | 4.0124 | | 3.9332 | 1.03 | 1297427 | 4.0074 | | 3.9251 | 0.03 | 1373747 | 4.0028 | | 3.9148 | 1.03 | 1450067 | 3.9989 | | 3.9065 | 0.03 | 1526387 | 3.9954 | | 3.9044 | 1.03 | 1602707 | 3.9925 | | 3.8995 | 0.03 | 1679027 | 3.9900 | | 3.8994 | 0.03 | 1755347 | 3.9872 | | 3.895 | 1.03 | 1831667 | 3.9849 | | 3.8861 | 0.03 | 1907987 | 3.9832 | | 3.8793 | 1.03 | 1984307 | 3.9809 | | 3.8748 | 0.03 | 2060627 | 3.9785 | | 3.8675 | 1.03 | 2136947 | 3.9774 | | 3.8656 | 0.03 | 2213267 | 3.9760 | | 3.8586 | 0.03 | 2289587 | 3.9746 | | 3.8518 | 1.03 | 2365907 | 3.9738 | | 3.85 | 0.03 | 2442227 | 3.9729 | | 3.8407 | 1.03 | 2518547 | 3.9720 | | 3.8388 | 0.03 | 2594867 | 3.9711 | | 3.8321 | 1.03 | 2671187 | 3.9704 | | 3.8326 | 0.03 | 2747507 | 3.9700 | | 3.8354 | 0.03 | 2823827 | 3.9696 | | 3.8349 | 1.03 | 2900147 | 3.9691 | | 3.8397 | 0.03 | 2976467 | 3.9687 | | 3.8387 | 0.02 | 3052726 | 3.9685 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
MochaPixel/RealPerson
MochaPixel
2024-01-17T05:20:51Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-17T04:56:43Z
--- license: creativeml-openrail-m ---
USYSTN/FirstModel
USYSTN
2024-01-17T05:19:54Z
0
0
adapter-transformers
[ "adapter-transformers", "zero-shot-classification", "en", "dataset:wikimedia/wikipedia", "license:apache-2.0", "region:us" ]
zero-shot-classification
2024-01-17T05:08:48Z
--- license: apache-2.0 datasets: - wikimedia/wikipedia language: - en metrics: - accuracy library_name: adapter-transformers pipeline_tag: zero-shot-classification ---
neenax/finetuneWizardLM13B-explanation-v1
neenax
2024-01-17T05:16:19Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:WizardLMTeam/WizardLM-13B-V1.2", "base_model:adapter:WizardLMTeam/WizardLM-13B-V1.2", "license:llama2", "region:us" ]
null
2024-01-17T05:16:13Z
--- license: llama2 library_name: peft tags: - generated_from_trainer base_model: WizardLM/WizardLM-13B-V1.2 model-index: - name: finetuneWizardLM13B-explanation-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuneWizardLM13B-explanation-v1 This model is a fine-tuned version of [WizardLM/WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.0 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
MaziyarPanahi/samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T05:15:48Z
22
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "cognitivecomputations/samantha-mistral-instruct-7b", "pytorch", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T05:10:50Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - cognitivecomputations/samantha-mistral-instruct-7b - transformers - pytorch - mistral - text-generation - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.1 samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [cognitivecomputations/samantha-mistral-instruct-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-instruct-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: cognitivecomputations/samantha-mistral-instruct-7b layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
socks22/ppo-lunarlandar-my-own
socks22
2024-01-17T05:07:32Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T04:57:25Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -167.00 +/- 60.60 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'socks22/ppo-lunarlandar-my-own' 'batch_size': 512 'minibatch_size': 128} ```
XinHun/The_Eminence_in_Shadow
XinHun
2024-01-17T05:03:27Z
0
1
null
[ "license:other", "region:us" ]
null
2024-01-17T05:01:33Z
--- license: other license_name: '001' license_link: LICENSE ---
Byungchae/test1
Byungchae
2024-01-17T04:49:50Z
0
0
null
[ "ko", "license:cc-by-nc-4.0", "region:us" ]
null
2024-01-16T04:56:10Z
--- license: cc-by-nc-4.0 language: ko --- ## Developed by : Byungchae Song ## Model Number: k2s3_test_0001 ## Base Model : * [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) ### Training Data * in-house dataset ### Training Method * PEFT QLoRA
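The card reports PEFT QLoRA training on Llama-2-13b-chat-hf but shows no inference code. A hedged sketch follows; it assumes the repository contains a standard PEFT adapter and loads the (gated) base model in 4-bit to mirror the QLoRA setup, and the Korean prompt is purely illustrative.

```python
# Hedged sketch: assumes Byungchae/test1 stores a PEFT adapter for the gated
# meta-llama/Llama-2-13b-chat-hf base; 4-bit loading mirrors the QLoRA recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"
bnb = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, "Byungchae/test1")

inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```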
MaziyarPanahi/japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T04:46:18Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "stabilityai/japanese-stablelm-base-gamma-7b", "japanese-stablelm", "causal-lm", "ja", "dataset:wikipedia", "dataset:mc4", "dataset:cc100", "dataset:oscar-corpus/OSCAR-2301", "dataset:oscar-corpus/OSCAR-2201", "dataset:cerebras/SlimPajama-627B", "arxiv:2310.06825", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T04:41:20Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - stabilityai/japanese-stablelm-base-gamma-7b - transformers - safetensors - mistral - text-generation - japanese-stablelm - causal-lm - ja - dataset:wikipedia - dataset:mc4 - dataset:cc100 - dataset:oscar-corpus/OSCAR-2301 - dataset:oscar-corpus/OSCAR-2201 - dataset:cerebras/SlimPajama-627B - arxiv:2310.06825 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.1 japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: stabilityai/japanese-stablelm-base-gamma-7b layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
tzs/q-Taxi-v3
tzs
2024-01-17T04:45:14Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T05:13:51Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="tzs/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
yeye776/t5-OndeviceAI-HomeIoT
yeye776
2024-01-17T04:40:31Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:paust/pko-t5-large", "base_model:finetune:paust/pko-t5-large", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-17T04:37:31Z
--- license: cc-by-4.0 base_model: paust/pko-t5-large tags: - generated_from_trainer model-index: - name: t5-OndeviceAI-HomeIoT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-OndeviceAI-HomeIoT This model is a fine-tuned version of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0007 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
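No usage example accompanies the card above. The following is only a sketch of standard text2text-generation inference with the fine-tuned checkpoint; the exact prompt format the model expects is undocumented, so the Korean home-IoT command is illustrative.

```python
# Hedged sketch: generic seq2seq inference; the prompt format is an assumption.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="yeye776/t5-OndeviceAI-HomeIoT")
print(pipe("거실 불 켜줘", max_new_tokens=64))  # "turn on the living-room light" (illustrative)
```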
gianlab/swin-tiny-patch4-window7-224-finetuned-crop-classification
gianlab
2024-01-17T04:38:21Z
32
1
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-16T03:58:00Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-crop-classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.7234369006520905 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-crop-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6957 - Accuracy: 0.7234 ## Model description This model was created by importing images of crop damage. I then used the image classification tutorial here: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb obtaining the following notebook: https://colab.research.google.com/drive/1qEskI6O-Jjv7UCanfQmUmzz8qUyg7FS3?usp=sharing The possible classified data are: Damage types | Damage | Definition | |-----------------|---------------------| | DR | Drought | | G | Good (growth) | | ND | Nutrient Deficient | | WD | Weed | | other | Disease, Pest, Wind | Crop example: ![Screenshot](crop.png) ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7819 | 1.0 | 183 | 0.7262 | 0.7016 | | 0.7104 | 1.99 | 366 | 0.6957 | 0.7234 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
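The card above documents training and the damage-type labels but gives no inference snippet. A standard image-classification sketch would look like the following; the image path is a placeholder.

```python
# Hedged sketch: standard image-classification inference; the image path is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="gianlab/swin-tiny-patch4-window7-224-finetuned-crop-classification",
)
for pred in classifier("crop_photo.jpg"):  # placeholder path to a crop image
    print(pred["label"], round(pred["score"], 3))
```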
tzs/q-FrozenLake-v1-4x4-noSlippery
tzs
2024-01-17T04:37:41Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T04:37:38Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="tzs/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
deepalweb/NIPZ_LORA
deepalweb
2024-01-17T04:37:05Z
1
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:gpl", "region:us" ]
text-to-image
2024-01-17T04:36:53Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: "winter, landscape, snow, cold, sneezing, trembling, snot, breath, runny nose, outdoors,  1girl, solo, milla maxwell, <lora:milla:1>, , highres, absurdres," output: url: >- images/10447-2985372919-winter, landscape, snow, cold, sneezing, trembling, snot, breath, runny nose, outdoors, 1girl, solo, milla maxwell, _lora_milla.jpeg - text: "winter, landscape, snow, cold, trembling, snot, breath, runny nose, outdoors,  1girl, solo, milla maxwell, <lora:milla:1>, , highres, absurdres," output: url: >- images/10445-2234743199-winter, landscape, snow, cold, trembling, snot, breath, runny nose, outdoors, 1girl, solo, milla maxwell, _lora_milla_1_.jpeg - text: "cherry blossoms, spring, landscape, park, happy, 1girl, solo, milla maxwell, <lora:milla:1>, , highres, absurdres," output: url: >- images/10444-661628697-cherry blossoms, spring, landscape, park, happy, 1girl, solo, milla maxwell, _lora_milla_1_.jpeg - text: "1girl, milla maxwell, fighting stance, solo, forest, <lora:milla:1>, , highres, absurdres," output: url: >- images/10443-3911278388-1girl, milla maxwell, fighting stance, solo, forest, _lora_milla_1_.jpeg base_model: runwayml/stable-diffusion-v1-5 instance_prompt: anime license: gpl --- # NIPZ <Gallery /> ## Model description ![10447-2985372919-winter, landscape, snow, cold, sneezing, trembling, snot, breath, runny nose, outdoors, 1girl, solo, milla maxwell, _lora_milla.jpeg](https://cdn-uploads.huggingface.co/production/uploads/64ed78088a351f5b73a3d2a6/Ex1MJ3nJXsnmSxqVbXWsd.jpeg) ![10445-2234743199-winter, landscape, snow, cold, trembling, snot, breath, runny nose, outdoors, 1girl, solo, milla maxwell, _lora_milla_1_.jpeg](https://cdn-uploads.huggingface.co/production/uploads/64ed78088a351f5b73a3d2a6/4iUyUcA6s5uamqqNIZZEg.jpeg) ![10444-661628697-cherry blossoms, spring, landscape, park, happy, 1girl, solo, milla maxwell, _lora_milla_1_.jpeg](https://cdn-uploads.huggingface.co/production/uploads/64ed78088a351f5b73a3d2a6/Cmug1yHTcHJiPLdvGYEdj.jpeg) ![10443-3911278388-1girl, milla maxwell, fighting stance, solo, forest, _lora_milla_1_.jpeg](https://cdn-uploads.huggingface.co/production/uploads/64ed78088a351f5b73a3d2a6/eE9znmfp-NH9nqyeqMImq.jpeg) ## Trigger words You should use `anime` to trigger the image generation. 
## Download model Weights for this model are available in Safetensors format. [Download](/deepalweb/NIPZ_LORA/tree/main) them in the Files & versions tab.
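For completeness, a hedged diffusers sketch for this SD 1.5 LoRA: it assumes the safetensors LoRA sits at the repo root in a layout `load_lora_weights` understands (pass `weight_name=` explicitly if not), and it uses the stated `anime` trigger word.

```python
# Hedged sketch: loading the LoRA on its stated base model; if load_lora_weights
# cannot locate the file automatically, pass weight_name="<file>.safetensors".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("deepalweb/NIPZ_LORA")
image = pipe("anime, 1girl, milla maxwell, cherry blossoms, park", num_inference_steps=30).images[0]
image.save("nipz_sample.png")
```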
coffeedjimmy/corgy_dog_LoRA
coffeedjimmy
2024-01-17T04:21:23Z
1
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-17T04:21:16Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of TOK dog license: openrail++ --- # SDXL LoRA DreamBooth - coffeedjimmy/corgy_dog_LoRA <Gallery /> ## Model description These are coffeedjimmy/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of TOK dog to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/coffeedjimmy/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
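The DreamBooth card lists the base model, trigger phrase, and training VAE but no inference code; a hedged sketch under those stated settings:

```python
# Hedged sketch: SDXL base + the fp16-safe VAE mentioned in the card, with the LoRA applied.
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("coffeedjimmy/corgy_dog_LoRA")
image = pipe("a photo of TOK dog in a flower field", num_inference_steps=25).images[0]  # stated trigger phrase
image.save("corgy_sample.png")
```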
WYNN747/Burmese-GPT-main-v7-1k
WYNN747
2024-01-17T04:16:15Z
9
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T03:01:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
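The card above is an empty template; based only on the repository tags (gpt2, text-generation), a generic and heavily hedged inference sketch is the most that can be offered. The prompt is just an illustrative Burmese greeting.

```python
# Hedged sketch: generic causal text generation inferred from the repo tags only.
from transformers import pipeline

generator = pipeline("text-generation", model="WYNN747/Burmese-GPT-main-v7-1k")
print(generator("မင်္ဂလာပါ", max_new_tokens=50)[0]["generated_text"])
```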
mlx-community/stable-code-3b-mlx
mlx-community
2024-01-17T04:07:01Z
15
1
transformers
[ "transformers", "stablelm_epoch", "text-generation", "causal-lm", "code", "mlx", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/the-stack-github-issues", "dataset:bigcode/commitpackft", "dataset:bigcode/starcoderdata", "dataset:EleutherAI/proof-pile-2", "dataset:meta-math/MetaMathQA", "license:other", "model-index", "autotrain_compatible", "region:us" ]
text-generation
2024-01-17T03:38:22Z
--- language: - en license: other library_name: transformers tags: - causal-lm - code - mlx datasets: - tiiuae/falcon-refinedweb - bigcode/the-stack-github-issues - bigcode/commitpackft - bigcode/starcoderdata - EleutherAI/proof-pile-2 - meta-math/MetaMathQA metrics: - code_eval model-index: - name: StarCoderBase-3B results: - task: type: text-generation dataset: name: MultiPL-HumanEval (Python) type: nuprl/MultiPL-E metrics: - type: pass@1 value: 32.4 name: pass@1 verified: false - type: pass@1 value: 30.9 name: pass@1 verified: false - type: pass@1 value: 32.1 name: pass@1 verified: false - type: pass@1 value: 32.1 name: pass@1 verified: false - type: pass@1 value: 24.2 name: pass@1 verified: false - type: pass@1 value: 23.0 name: pass@1 verified: false --- # mlx-community/stable-code-3b-mlx This model was converted to MLX format from [`stabilityai/stable-code-3b`](). Refer to the [original model card](https://huggingface.co/stabilityai/stable-code-3b) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/stable-code-3b-mlx") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T04:05:45Z
23
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "migtissera/SynthIA-7B-v1.5", "pytorch", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T04:00:31Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - migtissera/SynthIA-7B-v1.5 - transformers - pytorch - mistral - text-generation - en - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1 SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [migtissera/SynthIA-7B-v1.5](https://huggingface.co/migtissera/SynthIA-7B-v1.5) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: migtissera/SynthIA-7B-v1.5 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
LoneStriker/DaringLotus-10.7B-8.0bpw-h8-exl2
LoneStriker
2024-01-17T03:55:17Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Solar", "Mistral", "Roleplay", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T03:50:48Z
--- license: apache-2.0 tags: - Solar - Mistral - Roleplay --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/bjOB_8BsqVteKKxARPc13.png) I managed to do a heavy-density DARE TIES merge of SnowLotus and its parent models (an unusual strategy, I know) that seems okay (prose not too bad, not incoherent). Early impressions are that this has slightly different prose - maybe a touch more GPT in there, as it talks of connections, but not at all to the degree that many more synthetically based models do. You will probably find that unobtrusive. Like its sister model, it can and does take lore, character cards and in-context chat and create with them at times, and it is very descriptive. I cannot tell which is more coherent - occasionally they both get confused (as is typical with smaller models, particularly ones with better prose). I did notice that in particular contexts, SnowLotus' tendency for exaggerated escalation seemed stronger with this model. So there are differences (some prose and tone differences at least), and testing will probably tell which you prefer. They share more in common than they have differences - descriptive, fairly creative, occasionally confused but also sometimes surprisingly bright. And the prose has lots of similarities too; it's not generally your 'light, lyrical and poetic' affair. The summary, at least so far, is that this one is _slightly_ more GPT-ish in prose and more inclined to escalate scenarios and descriptions in a somewhat enthusiastic manner. Both feed a lot off context, so if you give them material they should not be mild or timid.
MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T03:53:53Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "maywell/Mini_Synatra_SFT", "pytorch", "ko", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational", "license:apache-2.0" ]
text-generation
2024-01-17T03:48:57Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - maywell/Mini_Synatra_SFT - transformers - pytorch - mistral - text-generation - ko - license:cc-by-sa-4.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1 Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [maywell/Mini_Synatra_SFT](https://huggingface.co/maywell/Mini_Synatra_SFT) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: maywell/Mini_Synatra_SFT layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
LoneStriker/DaringLotus-10.7B-6.0bpw-h6-exl2
LoneStriker
2024-01-17T03:46:12Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Solar", "Mistral", "Roleplay", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T03:42:45Z
--- license: apache-2.0 tags: - Solar - Mistral - Roleplay --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/bjOB_8BsqVteKKxARPc13.png) I managed to do a heavy-density DARE TIES merge of SnowLotus and its parent models (an unusual strategy, I know) that seems okay (prose not too bad, not incoherent). Early impressions are that this has slightly different prose - maybe a touch more GPT in there, as it talks of connections, but not at all to the degree that many more synthetically based models do. You will probably find that unobtrusive. Like its sister model, it can and does take lore, character cards and in-context chat and create with them at times, and it is very descriptive. I cannot tell which is more coherent - occasionally they both get confused (as is typical with smaller models, particularly ones with better prose). I did notice that in particular contexts, SnowLotus' tendency for exaggerated escalation seemed stronger with this model. So there are differences (some prose and tone differences at least), and testing will probably tell which you prefer. They share more in common than they have differences - descriptive, fairly creative, occasionally confused but also sometimes surprisingly bright. And the prose has lots of similarities too; it's not generally your 'light, lyrical and poetic' affair. The summary, at least so far, is that this one is _slightly_ more GPT-ish in prose and more inclined to escalate scenarios and descriptions in a somewhat enthusiastic manner. Both feed a lot off context, so if you give them material they should not be mild or timid.
rifqiakram/project-scoring
rifqiakram
2024-01-17T03:36:27Z
0
0
sklearn
[ "sklearn", "project_scoring", "bina", "agrisparta", "tabular-classification", "en", "id", "dataset:rifqiakram/project-scoring-dataset", "license:apache-2.0", "region:us" ]
tabular-classification
2024-01-16T08:44:33Z
--- license: apache-2.0 datasets: - rifqiakram/project-scoring-dataset language: - en - id metrics: - f1 library_name: sklearn pipeline_tag: tabular-classification tags: - project_scoring - bina - agrisparta --- # Model Card for Model ID This model is used to calculate the Project Bina score. ## Model Details ### Model Description - **Developed by:** [M. Rifqi Akram] - **Model type:** [Tabular Classification] - **Language(s) (NLP):** [Python] - **License:** [Apache license 2.0]
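The card does not say how the scikit-learn estimator is serialized or which feature columns it expects, so the following is only a sketch: the filename is a placeholder, and the input row must follow the column order of the linked training dataset.

```python
# Hedged sketch: assumes a joblib-serialized estimator; "model.joblib" is a placeholder filename.
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="rifqiakram/project-scoring", filename="model.joblib")
clf = joblib.load(path)

# sample = [[feature_1, feature_2, ...]]  # one row, in the training dataset's column order
# print(clf.predict(sample))
```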
MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T03:33:49Z
21
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Dans-DiscountModels/Mistral-7b-FFT-Test3", "pytorch", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T03:28:50Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Dans-DiscountModels/Mistral-7b-FFT-Test3 - transformers - pytorch - mistral - text-generation - generated_from_trainer - base_model:mistralai/Mistral-7B-v0.1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1 Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [Dans-DiscountModels/Mistral-7b-FFT-Test3](https://huggingface.co/Dans-DiscountModels/Mistral-7b-FFT-Test3) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: Dans-DiscountModels/Mistral-7b-FFT-Test3 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Marcus2112/q-Taxi-v3
Marcus2112
2024-01-17T03:32:13Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T03:31:00Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="koppelmann/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
lordspline/ninja-test
lordspline
2024-01-17T03:31:23Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-17T03:31:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/DaringLotus-10.7B-4.0bpw-h6-exl2
LoneStriker
2024-01-17T03:29:23Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Solar", "Mistral", "Roleplay", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T03:26:58Z
--- license: apache-2.0 tags: - Solar - Mistral - Roleplay --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/bjOB_8BsqVteKKxARPc13.png) I managed to do a heavy-density DARE TIES merge of SnowLotus and its parent models (an unusual strategy, I know) that seems okay (prose not too bad, not incoherent). Early impressions are that this has slightly different prose - maybe a touch more GPT in there, as it talks of connections, but not at all to the degree that many more synthetically based models do. You will probably find that unobtrusive. Like its sister model, it can and does take lore, character cards and in-context chat and create with them at times, and it is very descriptive. I cannot tell which is more coherent - occasionally they both get confused (as is typical with smaller models, particularly ones with better prose). I did notice that in particular contexts, SnowLotus' tendency for exaggerated escalation seemed stronger with this model. So there are differences (some prose and tone differences at least), and testing will probably tell which you prefer. They share more in common than they have differences - descriptive, fairly creative, occasionally confused but also sometimes surprisingly bright. And the prose has lots of similarities too; it's not generally your 'light, lyrical and poetic' affair. The summary, at least so far, is that this one is _slightly_ more GPT-ish in prose and more inclined to escalate scenarios and descriptions in a somewhat enthusiastic manner. Both feed a lot off context, so if you give them material they should not be mild or timid.
liminerity/Mini-blurstral
liminerity
2024-01-17T03:25:01Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-v0.1", "liminerity/Blur-7b-slerp-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T02:22:48Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - mistralai/Mistral-7B-v0.1 - liminerity/Blur-7b-slerp-v0.1 --- # Mini-blurstral broken Mini-blurstral is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [liminerity/Blur-7b-slerp-v0.1](https://huggingface.co/liminerity/Blur-7b-slerp-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [0, 9] - model: liminerity/Blur-7b-slerp-v0.1 layer_range: [0, 9] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "liminerity/Mini-blurstral" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jeiku/Futadom_Mistral
jeiku
2024-01-17T03:24:18Z
40
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-01-17T03:23:31Z
--- library_name: peft base_model: models/TheBloke_Mistral-7B-Instruct-v0.2-GPTQ --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
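The adapter's `base_model` field points at a local copy of TheBloke's GPTQ-quantized Mistral-7B-Instruct-v0.2, so a hedged sketch loads that base from the Hub (which needs `optimum` and `auto-gptq` installed) and attaches this adapter.

```python
# Hedged sketch: GPTQ base (requires optimum + auto-gptq) with the PEFT adapter on top.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, "jeiku/Futadom_Mistral")

messages = [{"role": "user", "content": "Introduce yourself."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```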
jeiku/Humiliation_Mistral
jeiku
2024-01-17T03:22:34Z
48
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-01-17T03:20:58Z
--- library_name: peft base_model: models/TheBloke_Mistral-7B-Instruct-v0.2-GPTQ --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LoneStriker/DaringLotus-10.7B-3.0bpw-h6-exl2
LoneStriker
2024-01-17T03:21:07Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Solar", "Mistral", "Roleplay", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T03:19:13Z
--- license: apache-2.0 tags: - Solar - Mistral - Roleplay --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/bjOB_8BsqVteKKxARPc13.png) I managed to do a heavy-density DARE TIES merge of SnowLotus and its parent models (an unusual strategy, I know) that seems okay (prose not too bad, not incoherent). Early impressions are that this has slightly different prose - maybe a touch more GPT in there, as it talks of connections, but not at all to the degree that many more synthetically based models do. You will probably find that unobtrusive. Like its sister model, it can and does take lore, character cards and in-context chat and create with them at times, and it is very descriptive. I cannot tell which is more coherent - occasionally they both get confused (as is typical with smaller models, particularly ones with better prose). I did notice that in particular contexts, SnowLotus' tendency for exaggerated escalation seemed stronger with this model. So there are differences (some prose and tone differences at least), and testing will probably tell which you prefer. They share more in common than they have differences - descriptive, fairly creative, occasionally confused but also sometimes surprisingly bright. And the prose has lots of similarities too; it's not generally your 'light, lyrical and poetic' affair. The summary, at least so far, is that this one is _slightly_ more GPT-ish in prose and more inclined to escalate scenarios and descriptions in a somewhat enthusiastic manner. Both feed a lot off context, so if you give them material they should not be mild or timid.
OpenDILabCommunity/LunarLander-v2-EfficientZero
OpenDILabCommunity
2024-01-17T03:18:36Z
0
0
pytorch
[ "pytorch", "deep-reinforcement-learning", "reinforcement-learning", "DI-engine", "LunarLander-v2", "en", "arxiv:2310.08348", "license:apache-2.0", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T10:07:40Z
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - LunarLander-v2 benchmark_name: OpenAI/Gym/Box2d task_name: LunarLander-v2 pipeline_tag: reinforcement-learning model-index: - name: EfficientZero results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 163.44 +/- 97.96 name: mean_reward --- # Play **LunarLander-v2** with **EfficientZero** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This implementation applies **EfficientZero** to the OpenAI/Gym/Box2d **LunarLander-v2** environment using [LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine). **LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348). ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed pip3 install DI-engine[common_env,video] pip3 install LightZero ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from lzero.agent import EfficientZeroAgent from ding.config import Config from easydict import EasyDict import torch # Pull model from files which are git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict) # Instantiate the agent agent = EfficientZeroAgent( env_id="LunarLander-v2", exp_name="LunarLander-v2-EfficientZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from lzero.agent import EfficientZeroAgent from huggingface_ding import pull_model_from_hub # Pull model from Hugggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/LunarLander-v2-EfficientZero") # Instantiate the agent agent = EfficientZeroAgent( env_id="LunarLander-v2", exp_name="LunarLander-v2-EfficientZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from lzero.agent import EfficientZeroAgent from huggingface_ding import push_model_to_hub # Instantiate the agent agent = EfficientZeroAgent(env_id="LunarLander-v2", exp_name="LunarLander-v2-EfficientZero") # Train the agent return_ = 
agent.train(step=int(20000000)) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/Box2d", task_name="LunarLander-v2", algo_name="EfficientZero", github_repo_url="https://github.com/opendilab/LightZero", github_doc_model_url=None, github_doc_env_url=None, installation_guide=''' pip3 install DI-engine[common_env,video] pip3 install LightZero ''', usage_file_by_git_clone="./efficientzero/lunarlander_efficientzero_deploy.py", usage_file_by_huggingface_ding="./efficientzero/lunarlander_efficientzero_download.py", train_file="./efficientzero/lunarlander_efficientzero.py", repo_id="OpenDILabCommunity/LunarLander-v2-EfficientZero", platform_info="[LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine)", model_description="**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).", create_repo=False ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'main_config': { 'exp_name': 'LunarLander-v2-EfficientZero', 'seed': 0, 'env': { 'env_id': 'LunarLander-v2', 'continuous': False, 'manually_discretization': False, 'collector_env_num': 8, 'evaluator_env_num': 3, 'n_evaluator_episode': 3, 'manager': { 'shared_memory': False } }, 'policy': { 'on_policy': False, 'cuda': True, 'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'model': { 'observation_shape': 8, 'action_space_size': 4, 'model_type': 'mlp', 'lstm_hidden_size': 256, 'latent_state_dim': 256, 'discrete_action_encoding_type': 'one_hot', 'res_connection_in_dynamics': True, 'norm_type': 'BN' }, 'use_rnd_model': False, 'sampled_algo': False, 'gumbel_algo': False, 'mcts_ctree': True, 'collector_env_num': 8, 'evaluator_env_num': 3, 'env_type': 'not_board_games', 'action_type': 'fixed_action_space', 'battle_mode': 'play_with_bot_mode', 'monitor_extra_statistics': True, 'game_segment_length': 200, 'transform2string': False, 'gray_scale': False, 'use_augmentation': False, 'augmentation': ['shift', 'intensity'], 'ignore_done': False, 'update_per_collect': 200, 'model_update_ratio': 0.1, 'batch_size': 256, 'optim_type': 'Adam', 'learning_rate': 0.003, 'target_update_freq': 100, 'target_update_freq_for_intrinsic_reward': 1000, 'weight_decay': 0.0001, 'momentum': 0.9, 'grad_clip_value': 0.5, 'n_episode': 8, 'num_simulations': 50, 'discount_factor': 0.997, 'td_steps': 5, 'num_unroll_steps': 5, 'reward_loss_weight': 1, 'value_loss_weight': 0.25, 'policy_loss_weight': 1, 'policy_entropy_loss_weight': 0, 'ssl_loss_weight': 2, 'lr_piecewise_constant_decay': False, 'threshold_training_steps_for_final_lr': 50000, 'manual_temperature_decay': False, 'threshold_training_steps_for_final_temperature': 100000, 'fixed_temperature_value': 0.25, 'use_ture_chance_label_in_chance_encoder': False, 'use_priority': True, 'priority_prob_alpha': 0.6, 'priority_prob_beta': 0.4, 'root_dirichlet_alpha': 0.3, 'root_noise_weight': 0.25, 'random_collect_episode_num': 0, 'eps': { 'eps_greedy_exploration_in_collect': False, 'type': 'linear', 'start': 1.0, 'end': 0.05, 'decay': 100000 }, 'cfg_type': 'EfficientZeroPolicyDict', 'lstm_horizon_len': 5, 'reanalyze_ratio': 0.0, 'eval_freq': 1000, 
'replay_buffer_size': 1000000 }, 'wandb_logger': { 'gradient_logger': False, 'video_logger': False, 'plot_logger': False, 'action_logger': False, 'return_logger': False } }, 'create_config': { 'env': { 'type': 'lunarlander', 'import_names': ['zoo.box2d.lunarlander.envs.lunarlander_env'] }, 'env_manager': { 'type': 'subprocess' }, 'policy': { 'type': 'efficientzero', 'import_names': ['lzero.policy.efficientzero'] } } } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](<TODO>) ## Model Information <!-- Provide the basic links for the model. --> - **Github Repository:** [repo link](https://github.com/opendilab/LightZero) - **Doc**: [Algorithm link](<TODO>) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-EfficientZero/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-EfficientZero/blob/main/replay.mp4) <!-- Provide the size information for the model. --> - **Parameters total size:** 17535.39 KB - **Last Update Date:** 2024-01-17 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/Box2d - **Task:** LunarLander-v2 - **Gym version:** 0.25.1 - **DI-engine version:** v0.5.0 - **PyTorch version:** 2.0.1+cu117 - **Doc**: [Environments link](<TODO>)
MaziyarPanahi/Mistral-7B-golden-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T03:13:07Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "liuda1/Mistral-7B-golden", "pytorch", "license:unknown", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational", "license:apache-2.0" ]
text-generation
2024-01-17T03:07:59Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - liuda1/Mistral-7B-golden - transformers - pytorch - mistral - text-generation - license:unknown - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # Mistral-7B-golden-Mistral-7B-Instruct-v0.1 Mistral-7B-golden-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [liuda1/Mistral-7B-golden](https://huggingface.co/liuda1/Mistral-7B-golden) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: liuda1/Mistral-7B-golden layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mistral-7B-golden-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
MaziyarPanahi/mistral-7b-slimorcaboros-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T03:03:43Z
21
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "openaccess-ai-collective/mistral-7b-slimorcaboros", "pytorch", "en", "dataset:Open-Orca/SlimOrca", "dataset:jondurbin/airoboros-3.1", "dataset:riddle_sense", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us", "conversational" ]
text-generation
2024-01-17T02:58:27Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - openaccess-ai-collective/mistral-7b-slimorcaboros - transformers - pytorch - mistral - text-generation - en - dataset:Open-Orca/SlimOrca - dataset:jondurbin/airoboros-3.1 - dataset:riddle_sense - license:apache-2.0 - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us --- # mistral-7b-slimorcaboros-Mistral-7B-Instruct-v0.1 mistral-7b-slimorcaboros-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [openaccess-ai-collective/mistral-7b-slimorcaboros](https://huggingface.co/openaccess-ai-collective/mistral-7b-slimorcaboros) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: openaccess-ai-collective/mistral-7b-slimorcaboros layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/mistral-7b-slimorcaboros-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
WIZard4332/phanteon
WIZard4332
2024-01-17T03:02:13Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-17T03:02:13Z
--- license: creativeml-openrail-m ---
futrx/fullstop-punctuation-multilang-large
futrx
2024-01-17T02:58:19Z
8
0
transformers.js
[ "transformers.js", "onnx", "xlm-roberta", "token-classification", "license:mit", "region:us" ]
token-classification
2024-01-17T00:28:46Z
--- library_name: transformers.js license: mit pipeline_tag: token-classification --- https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large with ONNX weights to be compatible with Transformers.js. *A quantized version is available for WASM.*
appvoid/palmer-002-2401
appvoid
2024-01-17T02:53:59Z
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "dataset:appvoid/no-prompt-50k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-16T21:33:21Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation datasets: - appvoid/no-prompt-50k --- ![palmer](https://huggingface.co/appvoid/palmer-001/resolve/main/new-logo.jpg) # palmer ### a better base model This is a small improvement over a (now un-prompted zyte) tinyllama model ### evaluation 🧪 note that this is a zero-shot setting as opposed to open llm leaderboard's few-shot evals ``` model ARC-C OBQA HellaSwag PIQA Winogrande Average tinyllama | 0.3029 | 0.3600 | 0.5935 | 0.7329 | 0.5959 | 0.5170 | palmer-002 | 0.3242 | 0.3700 | 0.5956 | 0.7345 | 0.5888 | 0.5226 | palmer-002-2401 | 0.3294 | 0.3700 | 0.5950 | 0.7399 | 0.5896 | 0.5247 | (this) babbage-002 | 0.3285 | 0.3620 | 0.6380 | 0.7606 | 0.6085 | 0.5395 | ``` ### training 🦾 Training took ~1 A100 gpu hour. It was trained on 50,000 gpt-4 shuffled samples. palmer was fine-tuned using lower learning rates, ensuring it keeps as much general knowledge as possible. ### prompt 📝 ``` no prompt 🚀 ``` <a href="https://ko-fi.com/appvoid" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 48px !important;width: 180px !important; filter: invert(70%);" ></a>
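A minimal usage sketch, assuming standard `transformers` causal-LM loading (the prompt and sampling settings below are illustrative, not from the model author):

```python
# Minimal text-generation sketch; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "appvoid/palmer-002-2401"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# No special prompt format is required; plain text works.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```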
MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T02:51:05Z
23
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "teknium/CollectiveCognition-v1.1-Mistral-7B", "pytorch", "mistral-7b", "instruct", "finetune", "gpt4", "synthetic data", "distillation", "sharegpt", "en", "dataset:CollectiveCognition/chats-data-2023-09-27", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us", "conversational" ]
text-generation
2024-01-17T02:46:06Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - teknium/CollectiveCognition-v1.1-Mistral-7B - transformers - pytorch - mistral - text-generation - mistral-7b - instruct - finetune - gpt4 - synthetic data - distillation - sharegpt - en - dataset:CollectiveCognition/chats-data-2023-09-27 - base_model:mistralai/Mistral-7B-v0.1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us --- # CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1 CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [teknium/CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: teknium/CollectiveCognition-v1.1-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Agreene5/Rhythm_Heaven_Style_LoRA
Agreene5
2024-01-17T02:44:33Z
0
0
null
[ "region:us" ]
null
2024-01-15T19:05:37Z
![](https://huggingface.co/Agreene5/Rhythm_Heaven_Style_LoRA/resolve/main/CivitAIExamples2/formodelcard.png "3 example images") # Rhythm Heaven Style LoRA for Stable Diffusion 1.5 + SDXL Model is also on CivitAI: https://civitai.com/models/87254?modelVersionId=258514 ## Model Details ### Version 1 parameters: steps_per_image: 50 total_images: 49 total_steps: ~2400 training_model: Anything_V3 network_dim: 128 network_alpha: 128 network_train_on: both learning_rate: 1e-4 unet_lr: 0 text_encoder _lr: 5e-5 lr_scheduler: constant lr_scheduler_num_cycles: 1 lr_scheduler_power: 1 train_batch_size: 6 num_epochs: 6 mixed_precision: fp16 save_precision fp16 save_n_epochs_type: save_every_n_epochs save_n_epochs_type_value: 1 resolution: 512 max_token_length: 225 clip_skip: 2 additional_argument: --shuffle_caption --xformers training_hardware: Google Colab Free Tier: Nvidia Tesla T4 GPU training_time: ~45 minutes ### Version 1.1 parameters: steps_per_image: 20 total_images: 122 (61 unique images, doubled amount by mirroring them) total_steps: 2440 training_model: Any_LoRA optimizer: AdamW network_dim: 128 network_alpha: 128 network_train_on: both learning_rate: 1e-4 unet_lr: 1e-4 text_encoder _lr: 5e-5 lr_scheduler: constant lr_scheduler_num_cycles: 1 lr_scheduler_power: 1 train_batch_size: 8 num_epochs: 6 mixed_precision: bf16 save_precision bf16 save_n_epochs_type: save_every_n_epochs save_n_epochs_type_value: 1 resolution: 768 max_token_length: 225 clip_skip: 2 additional_argument: --xformers training_hardware: RTX 3090 training_time: ~1.5 hours (I don't remember exactly) #### Version 1.1 Improvements: **Better style consistency**: The model generates in a style closer to the Rhythm Heaven series much more consistently. 1.0 generated a bit more of a detailed style though so if that's what you want you should use that one. **Removed "rhythm_heaven" trigger**: Seems like a style trigger isn't really necessary, removing it just saves a bit of token length. **Less unprompted black and white generations**: This one isn't as big but I manually added color to some of the training images to get more variety which consequently means you'll get less black and white generations. ### Version 1 (SDXL) parameters: steps_per_image: 20 total_images: 122 (61 unique images, doubled amount by mirroring them) total_steps: 7320 training_model: anima_pencil-XL optimizer: Adafactor network_dim: 128 network_alpha: 1 network_train_on: both learning_rate: 1.2e-3 unet_lr: 1.2e-3 text_encoder _lr: 1.2e-3 lr_scheduler: constant lr_scheduler_num_cycles: 1 lr_scheduler_power: 1 train_batch_size: 5 num_epochs: 15 mixed_precision: bf16 save_precision bf16 save_n_epochs_type: save_every_n_epochs save_n_epochs_type_value: 1 resolution: 1024 max_token_length: 75 clip_skip: 2 additional_argument: --xformers training_hardware: RTX 3090 training_time: ~6 hours #### Version 1 (SDXL) Improvements: **Cleaner looking images**: All of the images used to train this model were upscaled 2x so outputs are less grainy. **Better prompt understanding**: SDXL has a better understanding of prompts so training a LoRA using it as a base makes the LoRA get a better understanding too. ## Model Description Trained on humanoid characters from the Rhythm Heaven series (and some from Wario Ware) using AnyLoRA. Captions were done manually using booru tags. - **Model type:** Standard LoRA - **Finetuned from model:** Stable Diffusion 1.5 based models ## Uses Used in conjunction with a booru based Stable Diffusion 1.5 model (ex. 
Any_LoRA) to emulate the style of the Rhythm Heaven series. I recommend using it with a weight around 0.7 when prompting. As another reminder, this model was trained exclusively with booru tags, so I'm not sure how well it'll work with BLIP captions.
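A minimal `diffusers` sketch for applying the LoRA at the recommended ~0.7 weight. The base checkpoint and the `weight_name` below are placeholders (the card recommends a booru-based SD 1.5 model such as Any_LoRA), so substitute the actual file name from this repository:

```python
# Sketch: apply the style LoRA on an SD 1.5 checkpoint at ~0.7 strength.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # stand-in base; a booru-based model is recommended
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights(
    "Agreene5/Rhythm_Heaven_Style_LoRA",
    weight_name="rhythm_heaven.safetensors",  # hypothetical file name; check the repo files
)

image = pipe(
    "1girl, solo, simple background",
    cross_attention_kwargs={"scale": 0.7},  # LoRA weight around 0.7, as recommended
    num_inference_steps=25,
).images[0]
image.save("rhythm_heaven_style.png")
```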
mlx-community/NeuralBeagle14-7B-4bit-mlx
mlx-community
2024-01-17T02:38:28Z
19
4
mlx
[ "mlx", "mistral", "merge", "mergekit", "lazymergekit", "fblgit/UNA-TheBeagle-7b-v1", "argilla/distilabeled-Marcoro14-7B-slerp", "dpo", "rlhf", "license:apache-2.0", "region:us" ]
null
2024-01-17T01:25:32Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - fblgit/UNA-TheBeagle-7b-v1 - argilla/distilabeled-Marcoro14-7B-slerp - dpo - rlhf - mlx --- # mlx-community/NeuralBeagle14-7B-4bit-mlx This model was converted to MLX format from [`mlabonne/NeuralBeagle14-7B`](https://huggingface.co/mlabonne/NeuralBeagle14-7B). Refer to the [original model card](https://huggingface.co/mlabonne/NeuralBeagle14-7B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/NeuralBeagle14-7B-4bit-mlx") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
ifuseok/sft-solar-10.7b-v1
ifuseok
2024-01-17T02:29:43Z
2,267
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:nlpai-lab/databricks-dolly-15k-ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "dataset:KETI-AIR/kor_boolq", "dataset:heegyu/open-korean-instructions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-04T07:22:27Z
--- language: - en pipeline_tag: text-generation datasets: - nlpai-lab/databricks-dolly-15k-ko - kyujinpy/KOR-OpenOrca-Platypus-v3 - KETI-AIR/kor_boolq - heegyu/open-korean-instructions --- **Input** Models input text only. **Output** Models generate text only. **Base Model** [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) **Training Dataset** - [nlpai-lab/databricks-dolly-15k-ko](https://huggingface.co/datasets/nlpai-lab/databricks-dolly-15k-ko) - [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3) - [heegyu/open-korean-instructions](https://huggingface.co/datasets/heegyu/open-korean-instructions) - [KETI-AIR/kor_boolq](https://huggingface.co/datasets/KETI-AIR/kor_boolq) - [Part of the AIhub English-Korean translation data](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71593) # Implementation Code ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "ifuseok/sft-solar-10.7b-v1" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` # Prompt Example ``` ### System: This is the system message. ### User: This is the user message. ### Assistant This is the assistant message. ```
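A short generation sketch that applies the prompt format above (the message text and sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ifuseok/sft-solar-10.7b-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Prompt follows the "### System / ### User / ### Assistant" format shown above.
prompt = (
    "### System:\nYou are a helpful assistant.\n\n"
    "### User:\nIntroduce yourself in one sentence.\n\n"
    "### Assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```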
Ricardo54321/dqn-SpaceInvadersNoFrameskip-v4
Ricardo54321
2024-01-17T02:28:11Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-17T02:26:54Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 546.00 +/- 261.66 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ricardo54321 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ricardo54321 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ricardo54321 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
husse408/GenPT
husse408
2024-01-17T02:24:42Z
0
0
null
[ "license:cc-by-4.0", "region:us" ]
null
2024-01-14T21:39:33Z
--- license: cc-by-4.0 --- This repository hosts the models and datasets of the GenPT-GPT ("Generating Pretrained Trajectories using GPT") research.
etri-xainlp/llama2-12.8b_lora-dpo_v1
etri-xainlp
2024-01-17T02:22:50Z
130
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T01:58:46Z
--- license: apache-2.0 --- # etri-xainlp/llama2-12.8b_lora-dpo_v1 ## Model Details **Model Developers** ETRI xainlp team **Input** text only. **Output** text only. **Model Architecture** **Base Model** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) **Training Dataset** - sft+lora: 710k instruction-following set - dpo+lora: 90k user preference set - We used 8 x A100 80GB GPUs for training.
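A minimal loading and generation sketch, assuming standard `transformers` causal-LM loading (the prompt is illustrative; the card does not specify a prompt format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "etri-xainlp/llama2-12.8b_lora-dpo_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello, please introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```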
Loyola/Mistral-7b-ITmodel
Loyola
2024-01-17T02:22:43Z
2,366
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ko", "dataset:nlpai-lab/kullm-v2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T22:55:28Z
--- datasets: - nlpai-lab/kullm-v2 language: - en - ko license: apache-2.0 pipeline_tag: text-generation --- ## Model Details * **Base Model**: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers) ## Dataset Details * Dataset: nlpai-lab/kullm-v2 ### Prompt Template - Mistral Prompt Template
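A short sketch of building the prompt with `apply_chat_template`, assuming the tokenizer inherits the Mistral-Instruct `[INST] ... [/INST]` chat template from the base model (the message content is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Loyola/Mistral-7b-ITmodel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The chat template renders the Mistral instruct prompt format.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```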
apexmin/personal
apexmin
2024-01-17T02:02:26Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-17T01:25:29Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - apexmin/personal This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False.
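A minimal `diffusers` inference sketch for this DreamBooth checkpoint (the prompt and settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint and generate with the instance prompt.
pipe = StableDiffusionPipeline.from_pretrained("apexmin/personal", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```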
zaq-hack/Noromaid-13B-0.4-DPO-bpw600-h6-exl2
zaq-hack
2024-01-17T01:50:24Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T00:45:10Z
--- license: cc-by-nc-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png) --- # Use these presets in sillytavern!! [Context](https://files.catbox.moe/frkt0n.json) [Instruct](https://files.catbox.moe/zl01ev.json) <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains fp16 files of Noromaid-13b-v0.4-DPO. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13B-0.4-DPO) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13B-0.4-DPO-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ## Prompt format: NsChatml ``` <|im_system|> {sysprompt}<|im_end|> <|im_user|> {input}<|im_end|> <|im_bot|> {output}<|im_end|> ``` ## Training data used: - [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) let the model have more human behavior, enhances the output. - [Aesir Private RP dataset] New data from a new and never used before dataset, add fresh data, no LimaRP spam, this is 100% new. Thanks to the [MinvervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it! - [Another private Aesir dataset] - [Another private Aesir dataset] - [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) ## DPO training data used: - [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) - [NobodyExistsOnTheInternet/ToxicDPOqa](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicDPOqa) - [Undi95/toxic-dpo-v0.1-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning) This is a full finetune. ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
janhq/llamacorn-1.1b-GGUF
janhq
2024-01-17T01:40:32Z
5
1
null
[ "gguf", "alignment-handbook", "generated_from_trainer", "trl", "sft", "dataset:jan-hq/bagel_sft_binarized", "dataset:jan-hq/dolphin_binarized", "dataset:jan-hq/openhermes_binarized", "base_model:jan-hq/LlamaCorn-1.1B", "base_model:quantized:jan-hq/LlamaCorn-1.1B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-17T01:39:18Z
--- license: apache-2.0 base_model: jan-hq/LlamaCorn-1.1B tags: - alignment-handbook - generated_from_trainer - trl - sft - generated_from_trainer datasets: - jan-hq/bagel_sft_binarized - jan-hq/dolphin_binarized - jan-hq/openhermes_binarized model-index: - name: LlamaCorn-sft-adapter results: [] model_creator: jan-hq model_name: LlamaCorn-1.1B quantized_by: JanHQ --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Model Description This is a GGUF version of [jan-hq/LlamaCorn-1.1B](https://huggingface.co/jan-hq/LlamaCorn-1.1B) - Model creator: [jan-hq](https://huggingface.co/jan-hq) - Original model: [LlamaCorn-1.1B](https://huggingface.co/jan-hq/LlamaCorn-1.1B) - Model description: [Readme](https://huggingface.co/jan-hq/LlamaCorn-1.1B/blob/main/README.md) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. # Jan Model Converter This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute and strengthen this repository. We are aiming to expand the repo so that it can convert models into various formats.
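A short inference sketch with `llama-cpp-python`; the `.gguf` file name below is an assumption, so check the repository's file list for the actual name:

```python
# Sketch: download one GGUF file and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="janhq/llamacorn-1.1b-GGUF",
    filename="llamacorn-1.1b.Q4_K_M.gguf",  # hypothetical file name; verify in the repo
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("Question: What is a GGUF file?\nAnswer:", max_tokens=128)["choices"][0]["text"])
```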
homie246/Chipflake
homie246
2024-01-17T01:36:30Z
0
0
null
[ "license:other", "region:us" ]
null
2024-01-17T01:36:29Z
--- license: other license_name: the-chipflake license_link: LICENSE ---
MaziyarPanahi/Metis-0.4-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T01:34:29Z
25
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Mihaiii/Metis-0.4", "base_model:Mihaiii/Metis-0.3", "license:apache-2.0", "autotrain_compatible", "region:us", "conversational", "endpoints_compatible" ]
text-generation
2024-01-17T01:29:19Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Mihaiii/Metis-0.4 - transformers - safetensors - mistral - text-generation - merge - base_model:Mihaiii/Metis-0.3 - license:apache-2.0 - autotrain_compatible - text-generation-inference - region:us --- # Metis-0.4-Mistral-7B-Instruct-v0.1 Metis-0.4-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [Mihaiii/Metis-0.4](https://huggingface.co/Mihaiii/Metis-0.4) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: Mihaiii/Metis-0.4 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Metis-0.4-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mrbmaryam/Yarn-Mistral-7b-128k_Fine-Tuned4LogParsing-r1
mrbmaryam
2024-01-17T01:33:18Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T01:28:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/Nous-Hermes-2-Mixtral-8x7B-DPO-exl2
bartowski
2024-01-17T01:30:15Z
0
0
null
[ "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "text-generation", "en", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:finetune:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "region:us" ]
text-generation
2024-01-17T00:23:05Z
--- base_model: mistralai/Mixtral-8x7B-v0.1 tags: - Mixtral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation model-index: - name: Nous-Hermes-2-Mixtral-8x7B-DPO results: [] license: apache-2.0 language: - en quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Nous-Hermes-2-Mixtral-8x7B-DPO Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization. # The "main" branch only contains the measurement.json; download one of the other branches for the model (see below) Each branch contains a different bits-per-weight quantization, with the main one containing only the measurement.json for further conversions. Conversion was done using the default calibration dataset. Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6. Original model: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO ## Download instructions With git: ```shell git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Nous-Hermes-2-Mixtral-8x7B-DPO-exl2 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Nous-Hermes-2-Mixtral-8x7B-DPO-exl2`: ```shell mkdir Nous-Hermes-2-Mixtral-8x7B-DPO-exl2 huggingface-cli download bartowski/Nous-Hermes-2-Mixtral-8x7B-DPO-exl2 --local-dir Nous-Hermes-2-Mixtral-8x7B-DPO-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Nous-Hermes-2-Mixtral-8x7B-DPO-exl2 huggingface-cli download bartowski/Nous-Hermes-2-Mixtral-8x7B-DPO-exl2 --revision 4_0 --local-dir Nous-Hermes-2-Mixtral-8x7B-DPO-exl2 --local-dir-use-symlinks False ```
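The same branch-specific download can also be scripted from Python with `huggingface_hub` (here using the `4_0` branch as an example):

```python
from huggingface_hub import snapshot_download

# Download one bits-per-weight branch of the exl2 repo into a local folder.
snapshot_download(
    repo_id="bartowski/Nous-Hermes-2-Mixtral-8x7B-DPO-exl2",
    revision="4_0",  # pick the bits-per-weight branch you want
    local_dir="Nous-Hermes-2-Mixtral-8x7B-DPO-exl2",
    local_dir_use_symlinks=False,
)
```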
pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1
pinkyponky
2024-01-17T01:15:53Z
1,372
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T15:10:14Z
--- license: cc-by-nc-4.0 --- A description of how to load and test the model will be added soon. More details on training and data will be added as well. ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1") model = AutoModelForCausalLM.from_pretrained( "pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) ``` ### **Generating Text** To generate text, use the following Python code: ```python text = "Hi, my name is " inputs = tokenizer(text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=64) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
MaziyarPanahi/dolphin-2.0-mistral-7b-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T01:11:51Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "cognitivecomputations/dolphin-2.0-mistral-7b", "pytorch", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us", "conversational" ]
text-generation
2024-01-17T01:06:41Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - cognitivecomputations/dolphin-2.0-mistral-7b - transformers - pytorch - mistral - text-generation - en - dataset:ehartford/dolphin - dataset:jondurbin/airoboros-2.2.1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us --- # dolphin-2.0-mistral-7b-Mistral-7B-Instruct-v0.1 dolphin-2.0-mistral-7b-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [cognitivecomputations/dolphin-2.0-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.0-mistral-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: cognitivecomputations/dolphin-2.0-mistral-7b layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/dolphin-2.0-mistral-7b-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
MaziyarPanahi/samantha-mistral-7b-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T00:57:05Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "cognitivecomputations/samantha-mistral-7b", "pytorch", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T00:51:54Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - cognitivecomputations/samantha-mistral-7b - transformers - pytorch - mistral - text-generation - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us --- # samantha-mistral-7b-Mistral-7B-Instruct-v0.1 samantha-mistral-7b-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: cognitivecomputations/samantha-mistral-7b layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/samantha-mistral-7b-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
CLMBR/old-rel-cl-lstm-2
CLMBR
2024-01-17T00:49:23Z
8
0
transformers
[ "transformers", "pytorch", "rnn", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-01-12T16:18:54Z
--- tags: - generated_from_trainer model-index: - name: rel-cl-lstm-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rel-cl-lstm-2 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.8008 | 0.03 | 76319 | 4.7627 | | 4.5151 | 0.03 | 152638 | 4.4816 | | 4.374 | 0.03 | 228957 | 4.3478 | | 4.2862 | 1.03 | 305276 | 4.2645 | | 4.2226 | 0.03 | 381595 | 4.2082 | | 4.1691 | 1.03 | 457914 | 4.1669 | | 4.1333 | 0.03 | 534233 | 4.1361 | | 4.1034 | 1.03 | 610552 | 4.1115 | | 4.0735 | 0.03 | 686871 | 4.0929 | | 4.051 | 1.03 | 763190 | 4.0768 | | 4.0334 | 0.03 | 839509 | 4.0648 | | 4.0153 | 1.03 | 915828 | 4.0532 | | 3.9936 | 0.03 | 992147 | 4.0440 | | 3.9834 | 0.03 | 1068466 | 4.0364 | | 3.9681 | 1.03 | 1144785 | 4.0294 | | 3.9586 | 0.03 | 1221105 | 4.0229 | | 3.9442 | 1.03 | 1297425 | 4.0173 | | 3.9351 | 0.03 | 1373745 | 4.0124 | | 3.9238 | 1.03 | 1450065 | 4.0085 | | 3.9209 | 0.03 | 1526385 | 4.0051 | | 3.9142 | 1.03 | 1602705 | 4.0024 | | 3.9116 | 0.03 | 1679025 | 3.9996 | | 3.9073 | 1.03 | 1755345 | 3.9973 | | 3.9009 | 0.03 | 1831665 | 3.9954 | | 3.8922 | 1.03 | 1907985 | 3.9933 | | 3.8829 | 0.03 | 1984305 | 3.9910 | | 3.8762 | 1.03 | 2060625 | 3.9890 | | 3.8746 | 0.03 | 2136945 | 3.9878 | | 3.8673 | 1.03 | 2213265 | 3.9862 | | 3.8607 | 0.03 | 2289585 | 3.9850 | | 3.8607 | 0.03 | 2365905 | 3.9843 | | 3.8592 | 0.03 | 2442225 | 3.9831 | | 3.8521 | 1.03 | 2518545 | 3.9822 | | 3.8487 | 0.03 | 2594865 | 3.9816 | | 3.8455 | 1.03 | 2671185 | 3.9811 | | 3.846 | 0.03 | 2747505 | 3.9803 | | 3.846 | 1.03 | 2823825 | 3.9796 | | 3.846 | 0.03 | 2900145 | 3.9794 | | 3.8496 | 0.03 | 2976465 | 3.9789 | | 3.8456 | 1.02 | 3052726 | 3.9785 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
MaziyarPanahi/zephyr-beta-math-Mistral-7B-Instruct-v0.1
MaziyarPanahi
2024-01-17T00:47:42Z
21
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "abhishek/zephyr-beta-math", "pytorch", "tensorboard", "autotrain", "license:other", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us", "conversational", "license:apache-2.0" ]
text-generation
2024-01-17T00:42:38Z
--- license: apache-2.0 tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - abhishek/zephyr-beta-math - transformers - pytorch - tensorboard - mistral - text-generation - autotrain - license:other - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us --- # zephyr-beta-math-Mistral-7B-Instruct-v0.1 zephyr-beta-math-Mistral-7B-Instruct-v0.1 is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [abhishek/zephyr-beta-math](https://huggingface.co/abhishek/zephyr-beta-math) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.1 layer_range: [0, 32] - model: abhishek/zephyr-beta-math layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/zephyr-beta-math-Mistral-7B-Instruct-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```